Detailed CV

Here you can find out who I am: my CV and my publications.


Ghazi Bouselmi 




Date of Birth: 3 June 1979, Tunisia

Nationality: Tunisian, living in the Netherlands since 2009

Work Status: Highly skilled migrant

Languages: Arabic, French, English


EDUCATION

2004 – 2008

Université Henri Poincaré & LORIA, Nancy – France

PhD in computer science: “Contributions to Automatic Speech Recognition for Non-Native Speakers”

Supervised by: Prof. Jean-Paul Haton, Dr. Dominique Fohr, and Dr. Irina Illina.

Entailed C++ development under Linux, Shell scripting, distributed computing.

2003 – 2004

Université Henri Poincaré, Nancy – France

Master's degree in computer science: “Speaker segmentation and clustering of audio data”

Entailed C++ development under Linux, Shell scripting, Qt.

2000 – 2003

National School of Computer Science (ENSI), Tunisia

Engineer diploma, software engineering.

1998 – 2000

Preparatory Institute for Engineering Studies of Tunis (IPEIT), Tunisia

National entrance exam for engineering studies.


Science Baccalaureate, mathematics.


TECHNICAL SKILLS

C/C++, optimization, MFC, multi-threading, TCP/IP, Linux, shell scripting (bash, awk, sed, …), SQL, x86 assembly


PROFILE

I have been fond of programming since an early age. My passion for development has led me to undertake quite a few complex projects, among them a Pentium 4 assembly compiler. I enjoy facing challenges and solving difficult problems by thinking outside the box and finding unconventional solutions. I like research and development environments where I can express my creativity and have a degree of freedom to design and steer my tasks toward the solutions I envision.


WORK EXPERIENCE

Accent Groupe BV, Amsterdam

2009 – Current

Software Developer.

My duties include maintaining the proprietary trading software of AccentGroupe: troubleshooting bugs, developing new features, and optimizing diverse aspects of the system. In particular, I have:

  1. Optimized multi-threaded operations and application-layer message sending/queuing/processing, intensive MFC drawing, and socket-layer communication, through application-session virtualization and data-stream merging (via a software proxy) as well as TCP/IP parameter tuning.

  2. Developed a data-stream-oriented compression method based on the LZ algorithm.

  3. Developed a memory manager that transparently supplants all of the C-Runtime memory functions, in order to minimize the memory fragmentation that results from intensive memory usage and to improve the performance of memory operations.

  4. Developed a scalable indexation library for large data files, used to select sub-streams (e.g. specific instrument data within large Euronext exchange day backups).

  5. Developed a Unix-shell-like, telnet-compatible library that emulates shell commands (ls, cat, mount, …), exposes the OS file system, exposes application-internal variables through virtual-file bindings, and exposes internal function calls through the execution of virtual files, allowing remote execution of programs. This library serves as the basic infrastructure for a machine cluster that allows fast, centrally controlled test setups.

  6. Developed a C++ framework for automated trading strategies based on the data available in the proprietary trading software, allowing the implementation of complex automated strategies as well as automated testing of large numbers of strategies over the machine cluster mentioned above.

  7. Developed a FIX emulator (FIX 4.2) for training junior traders: it simulates the trading side of a stock exchange, allowing our trading servers to connect to it transparently, insert/modify/cancel orders, and receive trade confirmations.

School of engineering ESSTIN, Nancy - France

September 2007 – June 2008

Teaching Assistant

The course entailed an introduction to UML (Unified Modeling Language) in the context of object-oriented (OO) programming. The second part of the course concerned relational databases.

LORIA – Laboratoire lorrain de recherche en informatique et ses applications, Nancy - France

February 2007 – June 2007

Supervising Assistant

Assisting in the supervision of a Master's thesis in computer science entitled “Automatic detection of the mother tongue of non-native speakers” (Ms. Marina Piat).

IUT Nancy Charlemagne, Nancy - France

September 2004 – June 2005

Teaching Assistant

The course introduced students to the basics of C: variables, data structures, memory management, and basic loops and control structures.

LORIA – Laboratoire lorrain de recherche en informatique et ses applications, Nancy - France

March 2003 – August 2003

Engineer Traineeship: “Speaker segmentation of audio data”

The traineeship (supervised by Dominique Fohr and Odile Mella) consisted of implementing an automated approach for segmenting and clustering audio data into speaker turns.


    The following optimizations involved investigation, experimentation, and testing; in most cases they were driven by personal motivation and self-initiative rather than assigned tasks:

Optimization experience during my PhD:

  1. Optimization of GMM/HMM likelihood calculation: this work was carried out during my PhD and resulted in three publications: “Extended Partial Distance Elimination and Dynamic Gaussian Selection for Fast Likelihood Computation”, “Efficient Likelihood Evaluation and Dynamic Gaussian Selection for HMM-based Speech Recognition”, and “Dynamic Gaussian Selection Technique for Speeding up HMM-Based Continuous Speech Recognition”. The main approach was a heuristic based on the properties of GMM likelihood calculation, whereby part of the computation is dismissed when it is estimated to be insignificant. Further speed-up was achieved by exploiting the temporal correlation of speech observations, keeping track of the best-scoring calculations from one observation to the next. See the articles for more details.
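
The core of the partial-distance-elimination heuristic can be sketched as follows (an illustrative reconstruction, not the thesis code; diagonal-covariance Gaussians assumed):

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <vector>

// Partial Distance Elimination (PDE) sketch: when scoring a frame against a
// mixture of diagonal-covariance Gaussians, stop accumulating a component's
// negative log-score as soon as it already exceeds the best one seen so far.
struct Gaussian {
    std::vector<double> mean, invVar; // per-dimension mean and inverse variance
    double logConst;                  // precomputed log normalization term
};

// Returns the index of the best-scoring Gaussian for the frame.
std::size_t bestComponentPDE(const std::vector<Gaussian>& mix,
                             const std::vector<double>& frame) {
    double bestCost = std::numeric_limits<double>::infinity();
    std::size_t bestIdx = 0;
    for (std::size_t g = 0; g < mix.size(); ++g) {
        double cost = -mix[g].logConst; // negative log-likelihood so far
        for (std::size_t d = 0; d < frame.size(); ++d) {
            const double diff = frame[d] - mix[g].mean[d];
            cost += 0.5 * diff * diff * mix[g].invVar[d];
            if (cost >= bestCost) break; // partial distance elimination
        }
        if (cost < bestCost) { bestCost = cost; bestIdx = g; }
    }
    return bestIdx;
}
```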

Optimization experience during my work at AccentGroupe:

  1. MFC/CMap: automatic rehashing of the map based on upper/lower thresholds of the load factor ([number of elements]/[hash-table size]). This provided an automated way of reducing the discrepancy between the number of contained elements and the size of the hash table, especially in parts of the software where the number of elements is hard to estimate in advance.

  2. MFC/CMap: specific hash function for integers in specific usage cases: the default hash function offered by MFC ignores the 4 least significant bits, so many close values hash into the same entry of the hash table. This proved undesirable for usages where the keys were integers with narrow value ranges.
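
The effect can be illustrated with a portable sketch (the shift-by-4 mirrors MFC's default `HashKey`; the multiplicative hash is one illustrative alternative, not the exact replacement used):

```cpp
#include <cassert>
#include <cstdint>
#include <set>

// MFC's default HashKey for pointer-sized keys is essentially (key >> 4),
// which maps any 16 consecutive integers to the same bucket index.
inline std::uint32_t mfcLikeHash(std::uint32_t key) { return key >> 4; }

// A multiplicative (Knuth-style) hash keeps the low bits significant, so
// narrow integer key ranges still spread across buckets.
inline std::uint32_t mixingHash(std::uint32_t key) {
    return key * 2654435761u; // golden-ratio multiplier (odd => injective mod 2^32)
}
```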

  3. Message pool optimizations: our software uses message/object pools shared among different objects and threads. A pool is responsible for creating objects/messages and for tracking their lifetime (relieving other parts of the software of that responsibility). Since pools are shared among all parts of the software (and among threads), they proved to be a significant congestion point. I changed their architecture to use a static array of "sub-lists" (lists of available objects), dividing the stored objects roughly equally among them, with access to each sub-list protected by its own critical section. When an object is requested, the pool loops through its sub-lists:

    1. it first tries to lock the critical section (Win32 TryEnterCriticalSection); if this succeeds, it delivers the top object of the list (if any); if not, depending on policy, it either creates a new object or tries the following sub-lists.

    2. if no critical section was successfully locked (or no list contains an object), the algorithm loops a second time, this time taking the critical sections with blocking locks.

    3. interestingly, looping does not start at index 0 but at a randomized starting point, to minimize thread congestion.

    4. returning free objects to the pool follows a similar scheme.
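
The scheme above can be sketched portably, with `std::mutex::try_lock` standing in for Win32 `TryEnterCriticalSection` (class names, stripe count, and the create-on-miss policy are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <mutex>
#include <random>
#include <vector>

// Striped object pool sketch: free objects are spread over several sub-lists,
// each guarded by its own lock, probed opportunistically from a random start.
template <typename T>
class StripedPool {
    struct Stripe {
        std::mutex lock;
        std::vector<std::unique_ptr<T>> free;
    };
    static constexpr std::size_t kStripes = 8;
    Stripe stripes_[kStripes];
    std::mt19937 rng_{std::random_device{}()};

public:
    std::unique_ptr<T> acquire() {
        const std::size_t start = rng_() % kStripes; // randomized starting point
        // Pass 1: opportunistic try_lock on each stripe.
        for (std::size_t i = 0; i < kStripes; ++i) {
            Stripe& s = stripes_[(start + i) % kStripes];
            std::unique_lock<std::mutex> guard(s.lock, std::try_to_lock);
            if (guard.owns_lock() && !s.free.empty()) {
                auto obj = std::move(s.free.back());
                s.free.pop_back();
                return obj;
            }
        }
        // Policy choice for this sketch: nothing free anywhere => create new.
        return std::make_unique<T>();
    }

    void release(std::unique_ptr<T> obj) {
        const std::size_t start = rng_() % kStripes;
        for (std::size_t i = 0; i < kStripes; ++i) {
            Stripe& s = stripes_[(start + i) % kStripes];
            std::unique_lock<std::mutex> guard(s.lock, std::try_to_lock);
            if (guard.owns_lock()) {
                s.free.push_back(std::move(obj));
                return;
            }
        }
        // Pass 2: fall back to a blocking lock on the starting stripe.
        Stripe& s = stripes_[start];
        std::lock_guard<std::mutex> guard(s.lock);
        s.free.push_back(std::move(obj));
    }
};
```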

  4. TCP/IP: optimizations based on fine-tuning the TCP_NODELAY option (which sends small payloads immediately, instead of the default TCP behavior of waiting for the send buffer to reach a minimum size or for a timeout) and the TCP window size (especially for communication with our remote office, where a small TCP window meant significant time lost waiting for acknowledgments on both sides of the connection).
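
Enabling the option is a one-liner on either platform; a POSIX sketch (the Winsock call differs only in argument types):

```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

// Create a TCP socket with Nagle's algorithm disabled, so small messages are
// sent immediately rather than coalesced. Returns -1 on failure.
int makeLowLatencySocket() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    int one = 1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```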

  5. TCP/IP: changed the framework from asynchronous sockets to synchronous ones, which improved responsiveness.

  6. TCP/IP & network sessions: our software is designed with several socket sessions connecting a client to one of our servers, each socket dedicated to one kind of stream (transactions/trading; exchange feed/book & public data; etc.). To optimize network bandwidth usage for our remote office in Cyprus (whose clients need to connect to Amsterdam), I worked on a proxy project: clients connect to a proxy application running at the remote site (instead of directly to the server), which is in turn connected to a proxy application located in Amsterdam. Since a big part of the data received by the clients (both public instrument data and transaction/trading reports) is duplicated for each client, the proxy keeps track of the most recently sent messages to clients on the remote site, sends each new message only once, and sends only references when the same message must go to another client.

  7. TCP/IP & virtual sessions: this work alleviated the cost, in resources and threads, of each session connection (as part of a connection to one server). The project involved virtual session classes that inherit from the main session class but, rather than allocating resources and connecting to the server, rely on a parent session multiplexer (which actually holds a connection to the server) to forward messages in both directions. This architecture allowed a non-negligible saving in resources (including CPU usage) and provided a means of prioritizing the streams at the session multiplexer (the trading stream is given higher priority). These changes were transparent to any object using a session.

  8. Memory management: a few years ago we ran into various issues caused by the poor memory behavior of our software: our servers would regularly crash after running for some time because memory was used too intensively (alloc/realloc/free). This left the memory managed by the C-Runtime so fragmented that our software could not allocate memory even though hundreds of megabytes were theoretically available (but too fragmented for any useful purpose). The solution to this problem was two-fold:

    1. minimize reallocation, especially in arrays: estimate a size that lets the software run comfortably, allocate it when the owning object is created, disable shrinking of the array, and grow as needed (in quite large chunks).

    2. a memory manager based on the MS-Detours library provided by Microsoft: I "detoured" (re-implemented) the C-Runtime memory functions (malloc, free, realloc, calloc, etc.), which underlie all memory functionality and which, as implemented in the Visual Studio C-Runtime, are all based on Windows API heaps. Thus, (almost) all memory management calls end up in private functions that offer the same functionality but focus on minimizing memory fragmentation. This is done using higher-level memory management functions (Windows API VirtualAlloc/VirtualFree) and defining several memory heap types based on the granularity of the pointers allocated inside them (our software uses about 300 heap types, with granularities ranging from 16 bytes to 512 megabytes). For each heap type, one or more heaps are allocated/deallocated at runtime, depending on the needs of the software; the size of the virtual memory allocation per heap is estimated at the end of a run and adapted for the next one. When the software requests a piece of memory of a certain size, the allocation is served from the large virtual memory block (allocated for each instantiated private "heap") of the corresponding granularity (the first granularity big enough to fit the request). Further optimizations avoid extensive memory operations: policies governing, for example, when to move (or not) a reallocated pointer to a smaller granularity; binary (dichotomic) search for the right heap type; and embedding the heap object pointer (along with checksum validation bytes) in a header of the allocated pointer, for very quick handling in subsequent free/realloc calls. The full memory manager is completely transparent to all parts of the program that use the standard memory functions.

      Earlier, it was mentioned that "almost" all memory management calls end up in the private memory manager. Any part of the software that explicitly relies on memory functions other than the standard C ones (such as Windows heaps, or the Windows virtual memory API) is excluded. Besides, any DLL loaded by the software is not affected by the private memory manager. We opted not to invest more effort in detouring those functions, since the current functionality is enough to solve most of our issues.

      It is worth noting that the actual detouring of the memory functions is done during process initialization, even before the initialization of the C-Runtime library itself, by defining initialization functions and relying on the MS compiler/linker facilities in that regard. Apart from minimizing memory fragmentation and preventing our software from crashing after some run time, the memory manager improved the speed of memory operations by a factor of about 5.
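
The granularity lookup at the heart of such a manager can be sketched as follows (class sizes and names are illustrative; the real manager serves the request from VirtualAlloc'd heaps of the selected granularity):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Size-class ("granularity") table sketch: each allocation request is served
// from the first heap type whose granularity is large enough, found with a
// binary (dichotomic) search. Powers of two from 16 B to 512 MB shown here;
// the real manager used ~300 classes.
class SizeClassTable {
    std::vector<std::size_t> granularities_;
public:
    SizeClassTable() {
        for (std::size_t g = 16; g <= (std::size_t(512) << 20); g *= 2)
            granularities_.push_back(g);
    }
    // First class that fits the request (asserts the request is representable).
    std::size_t classFor(std::size_t request) const {
        auto it = std::lower_bound(granularities_.begin(),
                                   granularities_.end(), request);
        assert(it != granularities_.end() && "request larger than any class");
        return *it;
    }
};
```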

  9. Optimized SQL insertion of public trade records into an MS-SQL database. The optimization consisted of changing per-record insertion into bulk insertion (up to X records in one query, instead of X SQL queries). This dramatically reduced the CPU usage of our SQL server (a 32-core machine) and the record insertion delay (from up to 2 hours in some instances down to about 1 second, which is the granularity at which our software collects records and sends bulk queries). A general solution was adopted to allow for different types of data/table structures.
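
The idea reduces to emitting one multi-row statement instead of N single-row ones; a sketch with a made-up table schema (real code should use parameterized bulk APIs and proper escaping):

```cpp
#include <string>
#include <utility>
#include <vector>

// Build one multi-row INSERT from a batch of (instrument, price) records.
// Table and column names are illustrative, not the production schema.
std::string buildBulkInsert(const std::vector<std::pair<int, double>>& trades) {
    std::string sql = "INSERT INTO trades (instrument_id, price) VALUES ";
    for (std::size_t i = 0; i < trades.size(); ++i) {
        if (i) sql += ", ";
        sql += "(" + std::to_string(trades[i].first) + ", " +
               std::to_string(trades[i].second) + ")";
    }
    return sql;
}
```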

  10. Created a synchronization library offering the same functionality as the Windows API for HANDLEs, events, waitable timers, thread creation & handling, critical sections, mutexes, and semaphores, with the corresponding API alternatives for creation/destruction, locking/unlocking, waiting on objects, and thread sleep (with up to microsecond precision in most cases). The library can operate using its own thread or within the lifetime of the (client) threads calling it. It offers a private critical-section implementation (based on Interlocked-style APIs) that uses a spin-wait for locking and performs 5-10 times faster than the Windows API critical section and 2-2.5 times faster than STL's std::mutex. The HANDLE-oriented operations are based on a FIFO circular buffer using similar techniques, and are as a result 10 to 100 times faster than their Windows API counterparts (WaitForSingleObject, WaitForMultipleObjects, …). It is worth noting that these operations and the circular buffer are thread-safe without using any mutex or critical section; the circular buffer itself is 2-5 times faster than a regular mutex-protected one. Beyond the speed improvement, in most usages and for the same algorithms, this library reduces CPU usage compared to the Windows API or STL's mutex. A simpler version (re-implemented in my personal time in C++11 style) is available online.
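
A minimal spin-wait lock in the same spirit (portable sketch; the actual library used Interlocked-style Windows APIs and adds fairness/backoff policies):

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Minimal spin lock: test_and_set loops until the flag is acquired.
// A production version would add backoff/yield while spinning.
class SpinLock {
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
public:
    void lock() {
        while (flag_.test_and_set(std::memory_order_acquire)) {
            // busy-wait
        }
    }
    void unlock() { flag_.clear(std::memory_order_release); }
};
```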

  11. Optimized intensive MFC drawing: in our software, MFC drawings result from update objects/messages sent by our servers to the clients. The main idea here is to minimize the GDI operations themselves by accumulating the updates entailed by consecutive update messages and applying the final result once. Another aspect of this project is GUI responsiveness: intensive drawing made the software laggy and unresponsive to user actions. My contribution consisted of estimating the time elapsed while updating views (processing update messages), halting the update as soon as a maximum duration was reached, and scheduling a further update later. This scheme incorporates a processing quota shared between all views needing updates, weighted by the number of messages queued for each of them.

  12. Optimized message sending from server to client: several changes for different message types. For example:

    1. aggregating several data chunks into one message instead of a single chunk per message.

    2. implemented Ziv-Lempel (LZ) compression; later we switched to standard zip libraries.

    3. implemented sending only a maximum number of public trades to the client (per instrument, side & price level), along with a maximum number of trade levels, to avoid network congestion and excessive server resource usage.

  13. Optimized the processing of exchange data feeds using multi-threaded processing. This project concerned the delivery & processing of feed messages from the network/exchange layer of our feed server to the intermediary layer containing the stock-instrument objects. The previous version, in which messages were processed within a single thread belonging to the network/exchange layer, proved to be congested most of the time. Since message parsing is done within the network/exchange layer and results in method calls on objects within the intermediary layer, the implementation entailed forwarding/marshaling these method calls between the two layers. The intermediary layer, architected around a multi-threaded structure, receives messages encapsulating the information needed for correct marshaling of each call (method ID and parameters). It is interesting to note that, for performance reasons, the marshaling message structure is quite general and includes only a fixed number of static POD fields (plain old data, mainly integers). Furthermore, when a non-POD parameter needs to be marshaled, an object is created using an in-place constructor (placement new) and destroyed accordingly after being processed by the intermediary layer.
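
The placement-new marshaling trick can be sketched like this (field names and buffer size are illustrative):

```cpp
#include <cstddef>
#include <new>
#include <string>
#include <utility>

// Marshaling message sketch: a fixed POD layout plus a small inline buffer in
// which one non-POD argument can be constructed with placement new and later
// destroyed explicitly by the consuming side.
struct MarshalMsg {
    int methodId = 0;
    long long podArgs[4] = {};                              // fixed POD parameters
    alignas(std::max_align_t) unsigned char inlineBuf[64];  // room for a non-POD arg
    bool hasNonPod = false;
};

// Producer side: construct a std::string argument in place.
void putStringArg(MarshalMsg& m, const char* text) {
    static_assert(sizeof(std::string) <= sizeof(MarshalMsg::inlineBuf),
                  "inline buffer too small");
    new (m.inlineBuf) std::string(text); // placement new, no heap allocation for the object itself
    m.hasNonPod = true;
}

// Consumer side: move the value out and run the destructor explicitly,
// matching the placement new above.
std::string takeStringArg(MarshalMsg& m) {
    using Str = std::string;
    Str* s = reinterpret_cast<Str*>(m.inlineBuf);
    std::string out = std::move(*s);
    s->~Str();
    m.hasNonPod = false;
    return out;
}
```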

  14. Implemented a faster (and more accurate) microsecond relative-time-interval function relying on the "rdtsc" assembly instruction (and its variants).
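
A minimal sketch using the compiler intrinsic (x86 only; the tick-to-microsecond conversion assumes a separately calibrated TSC frequency, which the real implementation must measure):

```cpp
#include <cstdint>
#include <x86intrin.h>

// Read the CPU timestamp counter (x86 rdtsc).
inline std::uint64_t rdtscTicks() { return __rdtsc(); }

// Convert ticks to microseconds given the TSC frequency in GHz
// (e.g. 3.0; a placeholder, real code calibrates this at startup).
inline double ticksToMicroseconds(std::uint64_t ticks, double tscGHz) {
    return static_cast<double>(ticks) / (tscGHz * 1000.0);
}
```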

  15. Experimented with processor affinity for threads, aiming to minimize processor switching and thread cache/data migration between processor cores (limiting affinity to 1 or 2 processors), without significant success.
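
Such an experiment can be sketched on Linux as follows (the production code used the Windows affinity APIs, e.g. SetThreadAffinityMask):

```cpp
#include <pthread.h>
#include <sched.h>

// Pin the calling thread to a single CPU core. Returns true on success.
bool pinCurrentThreadToCore(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
}
```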

Optimization experience as a personal hobby:

  1. In the early 2000s, I participated in several seasons of an international assembly size-coding competition, where a given task has to be coded in assembly with the smallest possible executable size.
    Results can be found under the pseudonym "Tabchouri".

  2. 3D rendering, ray tracing & "Infinite Detail" (under Linux): implemented a 3D ray-tracing framework, following in the footsteps of the "Infinite Detail" engine. The framework features a world of maximum size 2^22 x 2^22 x 2^22 voxels (equivalent to one cubic kilometer, with a granularity of 64 voxels per cubic millimeter, each voxel having color and transparency properties). It offers a modular architecture where references to objects can be inserted into the world multiple times with different positions, orientations & scaling factors, and several camera types (planar, cylindrical, spherical, ...). Although the world size is 2^66 voxels, the software can accommodate several thousand objects (or copies) of sizes in the range of 2^12 to 2^15 (cubed) with a memory load of about 1.5 gigabytes. The drawing procedure is multi-threaded. As an example, with 8 CPU cores, 10 thousand objects (spheres/cubes) of size 2^12 cubed, and a planar camera, the framework renders 8 to 10 frames per second (at 800x600 resolution), without any GPU hardware acceleration.

  3. Mersenne-prime checking, using Visual Studio C++ AMP: implemented software that uses C++ lambda expressions & GPU parallel computing (offered by C++ AMP) to check for Mersenne primes (numbers of the form 2^P - 1, where P is prime). Numbers are implemented as arrays of digits in any base (ranging from 2 to 2^16), and digits can be either integers or floats. This entailed implementing several basic algebraic operators (sum, subtraction, multiplication, division, modulo, binary shifting) as well as advanced operations (Fourier transform, discrete cosine transform, exponentiation, …). With an NVIDIA series-8 video card with 2 GB of memory, the software performs about 100 times faster than a straightforward algorithm (for large enough numbers, in the range of 100 digits in base 2^16).
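
For reference, the underlying Lucas-Lehmer test, shown here in a tiny 64-bit sketch (the actual project implements this arithmetic as GPU big-number operations; this version only handles exponents up to 31):

```cpp
#include <cstdint>

// Lucas-Lehmer primality test for Mersenne numbers M_p = 2^p - 1:
// s_0 = 4, s_{i+1} = s_i^2 - 2 (mod M_p); M_p is prime iff s_{p-2} == 0.
bool isMersennePrime(unsigned p) {
    if (p == 2) return true; // M_2 = 3 is prime; the recurrence starts at p = 3
    const std::uint64_t m = (1ull << p) - 1;
    std::uint64_t s = 4;
    for (unsigned i = 0; i < p - 2; ++i)
        s = (s * s + m - 2) % m; // "+ m" avoids unsigned underflow when s*s < 2
    return s == 0;
}
```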


Development projects:

  1. As a hobby, I developed a compiler for the Intel Pentium 4 assembly language, under Delphi 6. The software can compile any Pentium 4 instruction, accepts a program structure similar to that of Pascal and Delphi (libraries & nested functions), and generates DOS executables in the .COM and .EXE formats.

    The code can be downloaded in

  2. As part of my duties at AccentGroupe, I implemented a FIX emulator that simulates the behavior of the trading side of a stock-exchange server, intended for training junior traders. The software offers the most common features of a stock exchange, including insertion/amendment/cancellation of orders (limit and market price types, stop orders, day and good-till-cancel orders), trade reporting, and matching of virtual orders against the real-time prices of the underlying instruments (while keeping track of the virtual volume already traded from the public instrument books). The FIX 4.2 implementation is transparent, allowing our servers to connect to it seamlessly, as they would to a real stock exchange.

  3. As part of my duties at AccentGroupe, I implemented an automated-strategies framework that allows relatively easy introduction of new automated strategies. A strategy is defined as a C++ class inheriting from a general-purpose one, and can react to any event occurring on its underlying instrument (public book updates, single best change, new public trade, state change, etc.). The framework features automated profit & loss tracking, with (configurable) closing of a strategy's position when enough profit or too much loss is reached. A strategy can therefore be as simple as a few lines of code encapsulating the desired behavior, but can also embed as intricate an algorithm as C++ allows (statistical calculation, database-aided decision making, …) and can define as many extra parameters as needed.

  4. As part of my duties at AccentGroupe, I devised an indexation library for large data-backup streams and the selection of sub-streams. It was used to index large Euronext daily backups and to select specific instrument data for replaying a few selected stocks.

  5. Later on at AccentGroupe, I developed a Linux-like shell library, complete with several commands (ls, cat, mkdir, grep, mount, chmod, piping commands through '|', output redirection to files through '>', etc.) and "telnet"-compatible remote access over TCP/IP. It allows access to the remote system's drives/folders/files, remote launching of operating-system applications, reading/modification of some of the application's internal variables (through virtual-file bindings), and execution of some of the application's internal functions (through the execution of virtual files). These features are implemented in a manner similar to the Linux disk architecture, where different types of file systems can be mounted and exposed through the telnet connection: any folder on the remote machine's drive, any point in the remote machine's registry, or any virtual file system in which the application binds variables and/or function calls as virtual files.

  6. Using the previous three libraries, I implemented a software cluster for the automated back-testing of trading strategies. The cluster is set up by remotely launching applications (clients holding and running the strategies; servers replaying indexed Euronext daily backups), remotely configuring them (through telnet), and controlling the ensemble through a centralized application (which also collects the results remotely through "telnet").

  7. During my work at AccentGroupe, and in relation to the private memory manager project, I implemented a library for easy access to the Windows registry (as a means to hold the memory manager's configuration). The library offers automated loading of the full sub-tree under a registry entry point into a tree of objects (on the software side), and creation/deletion of registry entries through the attachment/deletion of objects in the object tree. It also offers a facility to define hierarchical registry patterns (of use to the software). A pattern is defined (as a class deriving from the general-purpose one) through the minimal child keys it must have, with their names and value types; it is also possible to specify which patterns can appear as children of other patterns. The library recognizes patterns automatically, on the fly, while reading them from the registry or while modifying them (adding/removing children), and replaces the object entries accordingly within the object tree: promoting a general-purpose registry-entry object to an object representing a certain pattern (or upgrading a pattern object to a more specialized one requiring more matching children), or downgrading a pattern object (when children are removed) to a lower-matching pattern object (requiring fewer children) or to a general-purpose one (when no match is possible). Thus, any part of the software using this library can simply define its classes/patterns, load the registry data, have the patterns recognized automatically, and access the polymorphic object instances directly (relying on their runtime types to recognize useful patterns).

      It is worth noting that since this library is mainly used by the private memory manager, and since that part of the code must be initialized before any other in the software, the registry library could rely neither on the private memory manager nor on the C-Runtime memory manager for any memory operation. For this reason, I re-implemented the object creation/destruction operators (new & delete) within this library to rely on the Windows Heaps API (using one heap exclusively for this purpose).
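
Routing a class's allocations to a dedicated allocator looks like this in outline (malloc stands in for the private Windows heap, i.e. HeapAlloc/HeapFree, to keep the sketch portable):

```cpp
#include <cstdlib>
#include <new>

// Class-scoped operator new/delete routed to a dedicated allocator, as the
// registry library does with its private Windows heap. The class name is
// illustrative.
struct RegistryNode {
    int value = 0;

    static void* operator new(std::size_t size) {
        void* p = std::malloc(size); // real code: HeapAlloc(privateHeap, 0, size)
        if (!p) throw std::bad_alloc();
        return p;
    }
    static void operator delete(void* p) noexcept {
        std::free(p); // real code: HeapFree(privateHeap, 0, p)
    }
};
```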

PUBLICATIONS

Extended Partial Distance Elimination and Dynamic Gaussian Selection for Fast Likelihood Computation

Interspeech'2008/Eurospeech. Brisbane, Australia – September 2008

Ghazi Bouselmi, Michael Jun Cai

Multi-Accent and Accent-Independent Non-Native Speech Recognition

Interspeech'2008/Eurospeech. Brisbane, Australia – September 2008

Ghazi Bouselmi, Dominique Fohr, Irina Illina

Efficient Likelihood Evaluation and Dynamic Gaussian Selection for HMM-based Speech Recognition

Computer Speech and Language - CSL – 2008

Ghazi Bouselmi, Michael Jun Cai, Yves Laprie, Jean-Paul Haton

Dynamic Gaussian Selection Technique For Speeding up HMM-Based Continuous Speech Recognition

IEEE International Conference on Acoustics, Speech, and Signal Processing - ICASSP 2008. Las Vegas, NV/USA – April 2008

Ghazi Bouselmi, Michael Jun Cai, Yves Laprie, Jean-Paul Haton

Combined Acoustic and Pronunciation Modelling for Non-Native Speech Recognition

Interspeech'2007/Eurospeech. Antwerp, Belgium – August 2007

Ghazi Bouselmi, Dominique Fohr, Irina Illina

Détection de la langue maternelle de locuteurs non natifs fondée sur l'extraction de séquences discriminantes de phonèmes

Traitement et Analyse de l'Information : Méthodes et Applications - TAIMA'07. Hammamet, Tunisia – May 2007

Ghazi Bouselmi, Dominique Fohr, Irina Illina, Jean-Paul Haton

Discriminative Phoneme Sequences Extraction for Non-Native Speaker's Origin Classification

International Symposium on Signal Processing and its Applications - ISSPA 2007. Sharjah, United Arab Emirates (UAE) – February 2007

Ghazi Bouselmi, Dominique Fohr, Irina Illina, Jean-Paul Haton

Multilingual Non-Native Speech Recognition using Phonetic Confusion-Based Acoustic Model Modification and Graphemic Constraints

The Ninth International Conference on Spoken Language Processing - ICSLP 2006. Pittsburgh, PA/USA – September 2006

Ghazi Bouselmi, Dominique Fohr, Irina Illina, Jean-Paul Haton

Reconnaissance de parole non native fondée sur l'utilisation de confusion phonétique et de contraintes graphèmiques

XXVIes Journées d'Etude sur la Parole - JEP'06. Saint-Malo, France – June 2006

Ghazi Bouselmi, Dominique Fohr, Irina Illina, Jean-Paul Haton

Towards Speaker and Environmental Robustness in ASR: The HIWIRE Project

IEEE International Conference on Acoustics, Speech, and Signal Processing - ICASSP 2006. Toulouse, France – May 2006

Ghazi Bouselmi, A. Potamianos, D. Dimitriadis, Dominique Fohr, R. Gemello, Irina Illina, F. Mana, P. Maragos, M. Matassoni, V. Pitsikalis, J. Ramirez, E. Sanchez-Soto, J. Segura, P. Svaizer

Fully Automated Non-Native Speech Recognition Using Confusion-Based Acoustic Model Integration And Graphemic Constraints

IEEE International Conference on Acoustics, Speech, and Signal Processing - ICASSP 2006. Toulouse, France – May 2006

Ghazi Bouselmi, Dominique Fohr, Irina Illina, Jean-Paul Haton

Fully Automated Non-Native Speech Recognition Using Confusion-Based Acoustic Model Integration

Interspeech'2005 - Eurospeech — 9th European Conference on Speech Communication and Technology. Lisbon, Portugal – September 2005

Ghazi Bouselmi, Dominique Fohr, Irina Illina, Jean-Paul Haton

Multilingual Recognition of Non-Native Speech using Acoustic Model Transformation and Pronunciation Modeling

International Journal of Speech Technology, Springer Verlag, 2012

Ghazi Bouselmi, Dominique Fohr, Irina Illina


RECOMMENDATIONS

Dejan Misic
Financial Software Engineer (Freelance)

I've had a chance to collaborate with Ghazi on several important trading software projects, and was very impressed by his ability to quickly grasp a complex problem domain and offer architecturally strong software implementations, combining them with pragmatic solutions where necessary. His expertise in C++ and assembler, thorough understanding of the intricacies of operating systems, dedication and willingness to get to the bottom of a specific problem, passion for software design, and ability to juggle efficiently between multiple big projects make him a de facto expert developer.

November 7, 2011, Dejan worked directly with Ghazi at Accent Pointe BV

Khaled Abda
Sr. Software Engineer at thinkstep

I studied with Ghazi at IPEIT & ENSI; he was always among the most brilliant students. Both his classmates and teachers were impressed by his knowledge, especially in computer science, as well as his way of thinking and problem solving...

October 30, 2010, Khaled studied with Ghazi at ENSI

Wassim BenSlimane

IT POS, E-Commerce Applications, DMS Manager at Ooredoo Tunisia CSM®

Ghazi is a clever and creative person in several computer engineering fields. Very efficient and meticulous in his work. I highly recommend Ghazi to any prospective employers who are looking to enhance their team with good professional personnel.

June 26, 2009, Wassim studied with Ghazi at ENSI

Haithem Sfaxi

I had the chance to study with Ghazi for several years, and I couldn't stop being astonished by his hard-working skills and his smartness, which led him to be able to face all kinds of challenges he met.

December 29, 2008, Haithem studied with Ghazi at ENSI

Ilyes Gouta
Senior Software Designer II, Project Leader at STMicroelectronics

Well, simply put, Ghazi is a genius. He's able to work out very heavy and challenging problems by decomposing them mentally into smaller pieces and linking all the parts to get to a solution in a very short time. I had the opportunity to exchange some thoughts with him during my engineering studies and I still keep that feeling until today, that I have met a very rare and amazing person.

September 15, 2008, Ilyes studied with Ghazi at ENSI

Houssem BDIOUI
Senior Software Engineer / Rayen Soft manager

I had the chance to carry out several academic projects with Ghazi (typically software). I must say that I learned a lot from this smart and hard-working friend. And indeed, everyone was astonished by Ghazi's skills.

September 8, 2008, Houssem studied with Ghazi at ENSI