1. Social Networks and Corporate Meeting Systems
The study of social networks is a very active topic. The best illustration is the wonderful book "Six Degrees: The Science of a Connected Age" by Duncan Watts. A crude summary would be to say that the social graph of interactions between people is a truly fascinating object, with many interesting properties.
My goal is twofold. First, I am interested in networks that are enriched with "time" information; more precisely, networks whose edges are valued with the frequency of contact. What I want to study is mostly the efficiency (latency) of information propagation. Second, I am interested in affiliation networks, which are networks that describe the interactions of groups of people (see Matthieu Latapy's research page for more details). The combination of the two means that I want to understand "Corporate Meeting Systems" (the set of planned committees and scheduled meetings in a company) from an information propagation point of view.
The existence of the "small world" structure, which exhibits "logarithmic propagation" properties that I believe are highly relevant to CMS.
The importance of information latency as part of the design of a good organizational architecture.
The existence of an obvious trade-off between the number of people you meet and the frequency/depth with which you meet them. This is nicely illustrated in Malcolm Gladwell's famous book "The Tipping Point".
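As a toy illustration of these points, here is a minimal sketch (in Python, with entirely hypothetical numbers) of information latency on a frequency-valued small-world network: edges are recurring meetings with a period (days between contacts), and the expected propagation latency from one person to everyone else is computed with a shortest-path search where each hop costs half the meeting period (the expected wait for the next meeting).

```python
import heapq
import random

def small_world(n=200, k=4, shortcuts=40, seed=1):
    """Ring lattice with random shortcuts; each edge carries a meeting
    period in days (hypothetical values: daily, weekly or monthly)."""
    rng = random.Random(seed)
    edges = {}
    def add(u, v):
        period = rng.choice([1, 7, 30])
        edges.setdefault(u, []).append((v, period))
        edges.setdefault(v, []).append((u, period))
    for u in range(n):
        for d in range(1, k // 2 + 1):
            add(u, (u + d) % n)         # local "neighborhood" meetings
    for _ in range(shortcuts):
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            add(u, v)                   # long-range "committee" links
    return edges

def latency(edges, src):
    """Dijkstra where the cost of a hop is the expected wait before
    the next meeting (period / 2): expected propagation latency."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, period in edges[u]:
            nd = d + period / 2
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = small_world()
d = latency(g, 0)
print(max(d.values()))   # worst-case expected latency from node 0, in days
```

Even this crude model exhibits the trade-off above: adding shortcuts (more people met) shortens paths, while lowering meeting frequency (longer periods) stretches every hop.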
Publish a research article that extends Duncan Watts' results to time-valued networks, including affiliation networks.
Publish a management research article about CMS, which translates the scientific results into practical advice about how to use meetings more efficiently (from a "systems" viewpoint).
I have been working on this topic for a while, and I got really excited after writing my book last summer. I keep everyone informed about my progress on my blog on "Organizational Architecture" (in French). Now that I am starting to produce scientific results, I'll switch to a more traditional approach for relating my findings to the community.
2. Game Theoretical and Evolutionary Simulation
The second thread stands at the intersection of two interests of mine (for many years!):
The simulation of business models (such as competition, or process optimization)
The use of AI/learning techniques to produce self-adaptive optimizing algorithms.
It turns out that, while trying to use computer simulation to get insights into a business problem, I have encountered a similar situation many times, which I can describe as follows:
Too many unknown parameters in the model (for example, the S-curves that describe the elasticity of a given market)
Some parameters describe the "outside world" while some others describe the behavior of the "players"
The players' parameters may be divided into a few strategic parameters (high-level, describing the objectives of the player) and many "tactical" ones, which are linked to the former.
In such a situation, my approach is always the same:
Try a range of values for all external (market) parameters that I do not know (an implicit Monte Carlo simulation)
Play with the strategic parameters using a game-theoretical approach, building a matrix of strategies (for each player)
Use an evolutionary algorithm to find the optimal values for the tactical (linked) parameters.
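The three steps above can be sketched in a toy program (a hypothetical duopoly; all payoff functions and numbers are invented for illustration): sample the unknown market elasticity (Monte Carlo), build a payoff matrix over the players' high-level strategies, and tune each player's tactical parameter (here, a price) with a simple (1+1) evolution strategy inside each cell.

```python
import random
rng = random.Random(0)

STRATS = ["aggressive", "conservative"]   # high-level strategic choices

def payoff(my_price, other_price, strat, elasticity):
    """Toy duopoly payoff (hypothetical): total demand shrinks with the
    average price; the cheaper player captures more share, and an
    aggressive player pushes volume over margin."""
    total = max(0.0, 1.0 - elasticity * (my_price + other_price) / 2)
    share = other_price / (my_price + other_price)
    boost = 1.2 if strat == "aggressive" else 1.0
    return total * share * boost * my_price

def evolve_price(other_price, strat, elasticity, gens=150):
    """Step 3: (1+1) evolution strategy over the tactical parameter."""
    best = rng.uniform(0.05, 0.3)
    fit = payoff(best, other_price, strat, elasticity)
    for _ in range(gens):
        cand = min(1.0, max(0.05, best + rng.gauss(0, 0.05)))
        f = payoff(cand, other_price, strat, elasticity)
        if f > fit:
            best, fit = cand, f
    return best

def cell(s1, s2, elasticity, rounds=5):
    """Alternate best responses so each player's tactical price adapts
    to the other's, then report both payoffs for this strategy pair."""
    p1 = p2 = 0.5
    for _ in range(rounds):
        p1 = evolve_price(p2, s1, elasticity)
        p2 = evolve_price(p1, s2, elasticity)
    return (payoff(p1, p2, s1, elasticity),
            payoff(p2, p1, s2, elasticity))

# Step 1: Monte Carlo over the unknown market elasticity.
samples = [rng.uniform(0.5, 1.5) for _ in range(20)]
# Step 2: build the strategy matrix, averaging over the samples.
matrix = {}
for s1 in STRATS:
    for s2 in STRATS:
        pays = [cell(s1, s2, e) for e in samples]
        matrix[s1, s2] = tuple(sum(c) / len(pays) for c in zip(*pays))

# Pure Nash check: no player gains by switching strategy unilaterally.
nash = [(s1, s2) for s1 in STRATS for s2 in STRATS
        if matrix[s1, s2][0] >= max(matrix[t, s2][0] for t in STRATS)
        and matrix[s1, s2][1] >= max(matrix[s1, t][1] for t in STRATS)]
print(nash)
```

This only finds static pure-strategy equilibria, which, as noted below, is too crude for most dynamic problems; it is meant solely to show how the three ingredients fit together.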
This is not an original idea. For instance, it is used very nicely by Robert Axelrod in his book "The Complexity of Cooperation", where he uses a genetic algorithm approach to optimize competition strategies. However, it is neither widespread nor well understood. The combination of game theory, Monte Carlo simulation and evolutionary optimization is a powerful modeling/simulation approach that deserves some attention. I have coined the term GETS (Game-Theoretical and Evolutionary Simulation) to describe it.
Write a research paper that explains the principles of GETS.
Better characterization of equilibria, from a game theory point of view (my current use of static Nash equilibria is too crude for most problems, which are dynamic).
Develop a library of evolutionary algorithms.
Make this framework available as an open-source CLAIRE library
I have applied this approach to three families of problems so far:
- Competition between the three French cellular operators. A much earlier game-theoretical simulation was developed in 2000 to evaluate the competition between the four possible bidders for the UMTS licence.
- The SIFOA model: simulating the business processes of a company (cf. next thread)
- Retail channel optimization.
You may find the summary of my ROADEF'07 talk here (in French), which describes GETS as a generic framework. I have also written a few pages on my blog, both on the general method and on the results for some of the aforementioned problems. However, a scientific paper is badly needed :).
3. Enterprise Modeling and Business Process Optimization
The theme here is to study the behavior and performance of enterprise business processes from different angles. The heart of the topic is the modeling of activities according to business processes, hence there is a direct link to business process optimization. Since this is a very broad theme, I focus on one specific issue: the impact of organization and information flows on performance.
This is still a very generic theme, which has many counterparts in management science (the organization theory part) and behavioral sociology (the communication side). Therefore, there is a huge caveat: operations research and management science are not enough to tackle communication and organization issues. Using Bolman and Deal's decomposition of organizations into structural, human resources, political and symbolic dimensions, my work stands on the structural floor, in the sub-specialty of computational experiments. This being said, my opinion is that the structural component of enterprise organization is still a promising theme.
This is a topic that originates from management science and operations research. However, taking the issue at a very global level makes it closer to economics than to "classical" OR. For instance, a reference book on enterprise modeling is "Organizations" by J. March and H. Simon.
Propose a generic model of business process execution that incorporates a view on corporate organization, both from a structural and communication point of view. The difficulty here is to come up with a model that is simple enough to be explained and sophisticated enough to support the analysis of the intricate interactions between organization, management, communication and performance.
Build a workbench to explore and demonstrate the benefits of BP optimization techniques, such as Lean Six Sigma. Lean Six Sigma has become quite popular, and a lot has been written on lean management and six-sigma total quality management. Most books or texts explain how or why to deploy such a method, but few try to explain why or when it actually works. It turns out that this is a non-trivial question. A longer-term objective is to write a textbook on Business Process Optimization.
Similarly, use this framework to evaluate the potential impact of modern communication technology on business process performance. This is another non-trivial, and related, question. Simply understanding why or when faster information propagation improves performance is interesting.
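As a tiny illustration of the last point, here is a toy simulation (all numbers are hypothetical) of a serial business process in which every handoff between steps waits for the next synchronization point (an e-mail batch, a daily meeting, ...); shortening the synchronization period cuts end-to-end latency far more than the work durations alone would suggest.

```python
import random
rng = random.Random(0)

def process_time(steps=5, work_hours=4.0, sync_period=24.0, runs=2000):
    """Average end-to-end time of a serial process: each step takes
    roughly work_hours, then the handoff waits a random fraction of
    sync_period before the next step can start."""
    total = 0.0
    for _ in range(runs):
        t = 0.0
        for _ in range(steps):
            t += work_hours * rng.uniform(0.5, 1.5)   # step duration
            t += sync_period * rng.random()           # wait for handoff
        total += t
    return total / runs

daily = process_time(sync_period=24.0)    # daily synchronization
hourly = process_time(sync_period=1.0)    # near-continuous communication
print(round(daily, 1), round(hourly, 1))
```

The actual work amounts to about 20 hours in both cases; what changes is the waiting time, which is exactly the kind of effect a business process workbench should make visible.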
I started working on this topic two years ago. This has led to a research program called SIFOA, for "Simulation of Information Flows and Organizational Architecture". I have made some progress (and implemented a first version of a simulator), which I report on my blog.
4. Biology of Distributed Information Systems
My last theme of research is about biomimetics, autonomic computing and information systems. It lies at the intersection of three domains:
- OAI : Optimization of Application Integration.
OAI is the field of optimizing the quality of service in a business-process-oriented IT infrastructure. The link with EAI is obvious ... and dated. Today, it would be smarter to talk about an SOA architecture. The problem is the same, though: how does one optimize the quality of service, measured at the business process level, in a real-life information system (i.e., with failures, bursts, and so on)?
I have looked at this problem through my operations research background; it is a beautiful problem: rich, complex, and very relevant to real-life operations.
You may look at my last published paper on this topic: Y. Caseau, "Self-adaptive middleware: supporting business process priorities and service level agreements", Advanced Engineering Informatics, Volume 19, Issue 3, July 2005, pages 199-211.
- Autonomic Computing.
Autonomic computing (AC) is the name given to a large-scale research initiative by IBM, based on the following idea: mastering ever-increasing complexity requires making IT systems "autonomous". More precisely, "autonomic" is defined as the conjunction of four properties:
(1) self-configuring: the system adapts itself automatically and dynamically to the changes that occur in its environment. This requires a "declarative" configuration, with a statement of goals rather than a description of means. A good example of such a declarative statement is the use of SLAs as parameters. When something changes, the system tunes its internal parameters to keep satisfying its SLA policy.
(2) self-healing: the management of most incidents is done automatically. The discovery, diagnosis and repair of an incident are performed by the system itself, which presupposes a capacity to reason about itself. Therefore, such a system holds a model of its own behavior, as well as reasoning tools similar to so-called "expert systems". This is often seen as the comeback of Artificial Intelligence, although with a rich model, simple and proven techniques are enough to produce an action plan from incident detection.
(3) self-optimizing: the system continuously monitors the state of its resources and optimizes their usage. One may see this as the generalization of load balancing mechanisms to the whole IT system. This requires a complex performance model, which may be used both in a reactive (balancing) and proactive (capacity planning) manner (cf. Chapter 8).
(4) self-protecting: the system protects itself from different attacks, both in a defensive manner, by controlling and checking accesses, and in a proactive manner, through a constant search for intrusions.
(Note: for an excellent introduction to autonomic computing, one should read the article "The dawning of the autonomic computing era" by A.G. Ganek and T.A. Corbi, which may be found easily on the Web.)
The interest of these properties and their relevance to the management of IT systems are self-evident. One may wonder, however, why a dedicated research initiative is called for, instead of leveraging the constant progress of the associated fields of computer science. The foundation of the Autonomic Computing initiative is the belief that we have reached a complexity barrier: the management cost of IT infrastructures (installation, configuration, maintenance, ...) has grown relentlessly with their size and power, and it now represents more than half of the total cost. The next generations of infrastructure will be even more complex and powerful. IBM's thesis is that their management is possible only if it becomes automated. The article quoted above gives a wealth of statistics showing the ever-increasing share of operational tasks, while at the same time business reliance on IT systems is equally growing, making the financial consequences of IT outages disastrous.
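To make the idea concrete, here is a minimal, purely illustrative control loop in the autonomic style (all names, formulas and thresholds are invented): the system monitors a latency measure, compares it against a declarative SLA goal, and plans a scaling action, combining the self-configuring and self-optimizing properties in their simplest form.

```python
import random
rng = random.Random(0)

SLA_LATENCY = 200.0            # declarative goal: max latency in ms

def measure_latency(servers, load):
    """Monitor: toy model where latency grows with the load carried
    by each server (hypothetical formula)."""
    return 50.0 + 300.0 * load / servers

def control_step(servers, load):
    """Analyze + plan: keep the SLA with as few servers as possible."""
    if measure_latency(servers, load) > SLA_LATENCY:
        return servers + 1                    # scale up to restore the SLA
    if servers > 1 and measure_latency(servers - 1, load) <= SLA_LATENCY:
        return servers - 1                    # reclaim unused capacity
    return servers

servers, history = 1, []
for hour in range(24):
    load = 1.0 + rng.random() * 4.0          # fluctuating demand
    servers = control_step(servers, load)    # execute the plan
    history.append((hour, load, servers,
                    measure_latency(servers, load)))
print(history[-1])
```

The operator states the goal (the SLA value) and never the means (the server count); the loop derives the means continuously, which is exactly the declarative flavor described above.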
- "Organic Operations for IT Services".
Organic Operations is another coined expression that captures the opposition between a "life sciences" and a "mechanical" vision of the information system. In the latter view, reliability is achieved through spare parts and fail-safe mechanisms. In the former view, resilience is obtained through a larger set of mechanisms, redundancy being only one of them. The intuition of the need for an "organic model" came to me a few years ago, as a CIO, when I noticed that we still had a number of system failures although we were using high-availability, upscale servers grouped in redundant clusters, with an incredible array of back-up/restore mechanisms. However, whenever one of these incidents occurred, we (luckily) found a solution through an alternate process (with or without functional equivalence). The ability of the smartest and most experienced production engineers to find a "contournement" (a workaround: an alternate path for running the service) has never ceased to amaze me. The idea behind "organic operations" was to find a model to capture and reuse this knowledge.
I have started a second blog (in English) to share my thoughts on these topics. There is an obvious link between the theme and the "complex systems" domain.
Pursue the OAI experiments and qualify which kinds of companies are better targets for self-adaptive middleware.
Introduce the concept of alternate business processes, that is, the handling of exceptional situations with a degraded-but-acceptable composition of alternate processes.
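As a first sketch of what such alternate processes could look like (everything here is hypothetical: service names, quality values, and the composition rule), consider an orchestrator that falls back to the best available alternate at each step and reports a degraded-but-acceptable quality for the whole process:

```python
# Each step of a process has a primary service and optional degraded
# alternates; when the primary fails, the orchestrator falls back to the
# best alternate still up, rather than failing the whole process.

PROCESS = [  # (step name, [(service, quality), ...] best first)
    ("order",   [("web_front", 1.0), ("call_center", 0.7)]),
    ("payment", [("card_gateway", 1.0), ("invoice_later", 0.5)]),
    ("ship",    [("auto_dispatch", 1.0), ("manual_dispatch", 0.8)]),
]

def run_process(down):
    """Return (completed, overall quality) given a set of failed services.
    Quality is the product of per-step qualities (1.0 = nominal path)."""
    quality = 1.0
    for step, alternates in PROCESS:
        for service, q in alternates:
            if service not in down:
                quality *= q
                break
        else:
            return False, 0.0        # no alternate left: process fails
    return True, quality

print(run_process(set()))                          # nominal path
print(run_process({"card_gateway"}))               # degraded but acceptable
print(run_process({"web_front", "call_center"}))   # hard failure
```

Capturing the engineers' "contournements" in such a catalogue of alternates is one possible way to turn their implicit knowledge into an executable model.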