The Basel III regulatory framework, as planned, will not reduce systemic risk in the financial sector, according to new research. Instead, regulations should aim to increase the resilience of financial networks. Current regulations aimed at reducing the risk of crisis in the financial sector will not effectively reduce that risk, according to new research from the International Institute for Applied Systems Analysis (IIASA), published in the Journal of Economic Dynamics and Control. Introducing regulations that aim to increase the resilience of the system's networks would be more effective, the study shows. The Basel III framework is a new set of international banking regulations proposed after the financial crisis of 2007-08, with the goal of reducing the risk of future banking crises. The regulations, which are currently under intense discussion, would set higher requirements for bank capital and liquidity reserves and introduce capital surcharges for systemically important banks, those that are "too big to fail." The aim of the Basel III regulations is to reduce the risk of system-wide shocks in the financial sector. However, the study shows that the capital surcharges would have to be much higher than currently set in order to be effective, and that would lead to a severe loss of efficiency in the financial system. "The recent financial crisis clearly indicates that a resilient banking sector in terms of underlying financial networks is a necessary condition for achieving sustained economic growth. It is therefore essential that Basel III, the upcoming international regulatory framework for banks, really address the problem of systemic risk in the financial system," says IIASA researcher Sebastian Poledna, who led the study. The research is based on a state-of-the-art agent-based model of a financial system and the real economy. In particular, the study focused on banks that are "too big to fail," known as globally systemically important banks (G-SIBs).
Using the model, the researchers ran a series of experiments simulating different types of regulations and their impacts on financial system risk and resilience. Replacing the currently proposed Basel III regulations with different regulations that aim to restructure financial networks would be much more effective in increasing resilience while avoiding the loss of efficiency in markets, according to the study. Such methods could include smart transaction taxes based on the level of systemic risk, which the researchers proposed in a recent study, in order to reshape the topology of financial networks. "The new regulation scheme Basel III claims to explicitly address systemic risk. We were surprised to find how little it really does so under realistic scenarios. The study highlights how important data-driven agent-based modeling has become as a tool to help us identify unintended consequences of regulations and propose more effective solutions," says IIASA researcher Stefan Thurner, who co-authored the study. "The international banking system is complex and intricately connected," explains Poledna. "In order to make intelligent regulations, it's important to analyze how regulations will affect financial networks from a systemic perspective."

Reference
Poledna S, Bochmann O, & Thurner S (2017). Basel III capital surcharges for G-SIBs are far less effective in managing systemic risk in comparison to network-based, systemic risk-dependent financial transaction taxes. Journal of Economic Dynamics and Control 77: 230-246. DOI: 10.1016/j.jedc.2017.02.004.

Katherine Leitzell
Press and Public Relations
International Institute for Applied Systems Analysis (IIASA)
Complex Systems Lab is a nonprofit research group. The website serves as a news aggregator and collaboration tool for individuals from different organizations around the world. Everyone in the field of complex systems research is free to propose projects and to join the group.
Jeff Johnson from the Open University talks in this video about the importance of the Science of Complex Systems as a means to unravel real-world, multi-dimensional problems that cannot be tackled by conventional reductive methods. Using bird flu as an example, he demonstrates how complex systems thinking allows researchers to approach problems in novel ways. To find out more, follow the research links.
In order to harvest behavioral patterns of market participants, Doyne Farmer came up with the idea of using an agent-based economic model in a gaming mode. The idea is to replace some or all of the artificial agents by real people. In "wargaming style", users of a multiplayer game will be able to develop an intuition for the consequences of individual economic decisions in a dynamic, interconnected gaming environment. The data can be used to understand human decision making. Behavioral information can be useful in calibrating econometric agent-based models. Clearly, it will be quite challenging to design a game engaging enough to draw the attention of the gaming community. Also, there are serious doubts that the data can be used for a real-world model, because behavioral patterns in a game environment seem to be quite different from behavior in reality. Nevertheless, I think the idea is interesting enough to give it a try. Below I will outline some architectural considerations for such a game. If you are just interested in playing the game, you can go right here: http://game.ComplexLab.org. Keep in mind that this is a first prototype. It will be evolving constantly and there is no guarantee of service.

Architecture of Multiplayer Games
At some point in the game the outcomes of individual actions need to be synchronized among different computers/processes. In general, there are two common architectures for resolving arbitration: (1) client-server and (2) peer-to-peer. Client-server is conceptually simpler, easier to implement, and better suited to generating game data.

Client-Server
Each user of the game and each artificial agent runs a local client program. The client programs are connected to a central machine running the server program. The server program maintains the state of the game and broadcasts this information to the individual clients. This design makes the server the bottleneck, both computationally and in bandwidth, and it may turn out to be a serious scaling problem.
On the other hand, it is easy to maintain game state and access control. The minimal example of a client program is a terminal: it transmits user inputs to the server and reports server messages to the user. The main loop of the client program would look like this:
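A minimal sketch of such a client loop, in Python. The transport is abstracted as a pair of queues standing in for the socket connection to the server; `read_input` and `show` are hypothetical stand-ins for the terminal's input and output:

```python
import queue

def client_loop(to_server, from_server, read_input, show):
    """Minimal terminal-style client loop (sketch).

    to_server / from_server are stand-in queues; in a real client they
    would wrap a network socket. read_input() yields user commands and
    show() displays server messages to the user.
    """
    for command in read_input():
        to_server.put(command)             # transmit user input to the server
        while not from_server.empty():     # report pending server messages
            show(from_server.get())
```

The real client would block on the socket instead of polling a queue, but the structure, reading input, forwarding it, and displaying server broadcasts, stays the same.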
Real-time Push Notification
Ajax Push Engine (APE) is an open-source real-time push notification system for streaming data to any browser using web standards only (Asynchronous JavaScript and XML, AJAX). It includes a comet server and a JavaScript framework. Here is a demo that shows how APE can handle massive numbers of users moving on a web page in real time.

Peer-to-Peer
In a peer-to-peer (P2P) system each user of the game runs the same peer program, or at least groups of users run the same program. The peer program maintains the local state, e.g. the position of the player. When moving, the peer program is also responsible for collision avoidance. Therefore, peers need to broadcast their state to other peers. A minimalistic example of a peer loop would look like this:
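A minimal sketch of a peer loop in Python. The peer owns its local state (here just a grid position), checks moves against the last known positions of the other peers, and broadcasts updates; `peers` and `broadcast` are hypothetical stand-ins for the networking layer:

```python
def peer_loop(state, my_moves, peers, broadcast):
    """Minimal peer loop (sketch): apply local moves, avoid collisions
    against the last known peer positions, and broadcast updates.

    `peers` maps peer ids to their last known positions; `broadcast`
    sends this peer's new position to all other peers.
    """
    for dx, dy in my_moves:
        candidate = (state["pos"][0] + dx, state["pos"][1] + dy)
        if candidate not in peers.values():    # local collision avoidance
            state["pos"] = candidate
            broadcast(state["pos"])            # tell the other peers
    return state
```

Note that because each peer arbitrates locally, two peers can momentarily disagree about who occupies a cell, which is exactly the consistency problem discussed below.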
There are several issues with this type of architecture that need to be addressed:
Possible Gaming Scenarios
Farmer et al. [1] proposed a number of gaming scenarios:
To be continued. 
Entropy and Information
Information is a reduction in uncertainty and has nothing to do with knowledge. Imagine you are about to observe the outcome of a coin flip and the outcome of a die roll. Which event contains more information? Following Abramson, the information contained in the outcome of an event E with probability P(E) is equal to log 1/P(E) bits of information. For the unit bits we use log base 2. For the result of a fair coin flip we get log 2 = 1 bit, and for the die roll log 6 ≈ 2.585 bits.

Entropy
Now imagine a zero-memory information source X. The source emits symbols from an alphabet {x_1, x_2, ..., x_k} with probabilities {p_1, p_2, ..., p_k}. The symbols emitted are statistically independent. What is the average amount of information in observing the output of the source X? Shannon formulated the most fundamental notion in information theory for a discrete random variable taking values from $\mathcal{X}$. The entropy of X is

$H[X] = \sum_{i=1}^{k} p_i \log \frac{1}{p_i} = -\sum_{i=1}^{k} p_i \log p_i$

Proposition
Interpretation
H[X] measures:
“paleface” problem

Description Length
H[X] = how concisely can we describe X? Imagine X as a text message:
Known and finite number of possible messages (#X). I know what X is but won’t show it to you. You can guess it by asking yes/no (binary) questions. First goal: ask as few questions as possible.
New goal: minimize the mean number of questions
Theorem: H[X] is the minimum mean number of binary distinctions needed to describe X. (Units of H[X] are bits.)

Multiple Variables
The joint entropy of two variables X and Y is the entropy of their joint distribution:

$H[X, Y] = -\sum_{x, y} p(x, y) \log p(x, y)$

This is the minimum mean length to describe both X and Y.
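The definitions above are easy to check numerically. A small stdlib-only sketch computing the entropy of the coin, the die, and the joint entropy of two independent fair coins:

```python
import math

def entropy(probs, base=2):
    """Shannon entropy H[X] = sum_i p_i * log(1/p_i) of a discrete source."""
    return sum(p * math.log(1.0 / p, base) for p in probs if p > 0)

coin = entropy([0.5, 0.5])        # fair coin: 1 bit
die = entropy([1.0 / 6] * 6)      # fair die: log2(6) ~ 2.585 bits

def joint_entropy(joint):
    """Joint entropy: just the entropy of the flattened joint distribution."""
    return entropy([p for row in joint for p in row])

# two independent fair coins: H[X, Y] = H[X] + H[Y] = 2 bits
two_coins = joint_entropy([[0.25, 0.25], [0.25, 0.25]])
```

For independent variables the joint entropy is additive, as the two-coin example shows; dependence can only lower it.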
Entropy and Ergodicity
(Dynamical systems as information sources, long-run randomness)

Relative Entropy and Statistics
(The connection to statistics)

References
Cover and Thomas (1991) is the best single book on information theory.

Tutorials

Conferences

Research Groups

There are many views on what emergence is. At the same time, it is one of the most seductive buzzwords in complex systems science. This summary is based on a talk by Robert MacKay, University of Warwick. It explains emergence as a property of a nonlinear dynamical system: non-unique statistical behavior without any topological reason.

History
Weak vs. Strong Emergence
Chalmers (2002), Strong and Weak Emergence:
Q: Will something not deducible ever happen? Unexpected to whom?

Dynamic Systems View
What emerges from a spatially extended dynamical system are probability distributions over space-time histories (space-time phases) that arise from typical initial probabilities in the distant past. The amount of emergence is the "distance" of a space-time phase from the set of products for independent units. Strong emergence means a non-unique space-time phase (but not due to decomposability). Examples:
Decomposability (we will not allow strong emergence to arise from this):
Proved examples of strong emergence:
Crutchfield: non-trivial collective behavior
Resources

This is a short Bayesian analysis tutorial developed around the following problem: consider the following dataset, which is a time series of recorded coal mining disasters in the UK from 1851 to 1962 [Jarrett, 1979]. The first step in the analysis is the Bayesian model building. We assume that the occurrences of the disasters can be modelled as a Poisson process. Our hypothesis is that at some point in time the mining process switched to a new technology, resulting in a lower rate of disasters in the later part of the time series. We define the following variables for the Bayesian stochastic model:
The model can then be defined as

D_t ~ Poisson(e) for t < s, and D_t ~ Poisson(l) for t >= s,
s ~ DiscreteUniform(t_l, t_h),

where D is dependent on s, e, and l. In a Bayesian network the probabilistic variables s, e, and l are considered 'parent' nodes of D, and D is a 'child' node of s, e, and l. Similarly, the 'parents' of s are t_l and t_h, and s is the 'child' of t_l and t_h.
The next step is fitting the probability model (a linked collection of probabilistic variables) to the recorded mining disaster time series. This means we are trying to represent the posterior distribution. A Markov chain Monte Carlo (MCMC) algorithm [Gamerman 1997] is the method of choice. In this case we represent the posterior p(s,e,l|D) by a set of joint samples from it. The MCMC sampler produces these samples by randomly updating the values of s, e and l for a number of iterations. This updating algorithm is called Metropolis-Hastings [Gelman et al. 2004]. With a reasonably large number of samples, the MCMC distribution of s, e and l converges to the stationary distribution. There are many general-purpose MCMC packages. Here I use PyMC, a Python module created by David Huard, Anand Patil, and Chris Fonnesbeck. I prefer a programming interface to stand-alone programs like WinBUGS for its flexibility. It uses high-performance numerical libraries like NumPy and optimized Fortran routines. The following code imports the model, instantiates the MCMC object and runs the sampler algorithm:

>>> from pymc.examples import DisasterModel
>>> from pymc import MCMC
>>> M = MCMC(DisasterModel)
>>> M.isample(iter=10000, burn=1000, thin=10)

Below are the 900 samples (left) of variable s, the year in which the rate parameter changed. The histogram (right) has a mean at 40 and a 95% HPD interval of (35, 44). Next, the 900 samples (left) of variable e, the early rate parameter prior to s. The histogram (right) has a mean at 3.0 and a 95% HPD interval of (2.5, 3.7). Finally, the 900 samples (left) of variable l, the late rate parameter after s. The histogram (right) has a mean at 0.9 and a 95% HPD interval of (0.7, 1.2).
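To demystify what PyMC does internally, here is a hand-rolled random-walk Metropolis sampler (a special case of Metropolis-Hastings) for the same change-point model. It is a sketch only: the Exponential(1) priors, the proposal step sizes, and the data used in the test are illustrative assumptions, not PyMC's defaults or the Jarrett dataset.

```python
import math
import random

def log_post(s, e, l, data):
    """Unnormalised log posterior of the change-point model: counts follow
    Poisson(e) before year s and Poisson(l) from year s on, with assumed
    Exponential(1) priors on both rates and a uniform prior on s.
    (The constant log d! terms of the Poisson likelihood cancel and are omitted.)"""
    if e <= 0 or l <= 0 or not 0 < s < len(data):
        return -math.inf
    ll = sum(d * math.log(e) - e for d in data[:s])
    ll += sum(d * math.log(l) - l for d in data[s:])
    return ll - e - l                      # exponential log-priors

def metropolis(data, iters=5000, seed=1):
    """Random-walk Metropolis over (s, e, l): propose a small joint move,
    accept it with probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    s, e, l = len(data) // 2, 1.0, 1.0
    samples = []
    for _ in range(iters):
        s2 = s + rng.choice([-1, 0, 1])
        e2 = e + rng.gauss(0, 0.3)
        l2 = l + rng.gauss(0, 0.3)
        if math.log(rng.random()) < log_post(s2, e2, l2, data) - log_post(s, e, l, data):
            s, e, l = s2, e2, l2
        samples.append((s, e, l))
    return samples
```

Discarding the first half of the chain as burn-in and averaging the rest gives posterior means for s, e, and l, just as the PyMC histograms above do.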

Computational Mechanics is an extension of the approaches typically found in statistical mechanics. It is a technique to describe both temporal and spatial organization in systems, and a more detailed structural analysis of behavior than those captured solely in terms of probability and degrees of randomness. It complements and augments more traditional techniques in statistical physics by describing the structural regularities inherent in the system or data. Beyond focusing on measures of disorder, computational mechanics aims to detect and quantify structure in natural processes. Natural information processing mechanisms can be represented and analyzed in a diverse set of model classes. Contemporary notions of computation and of useful information processing must be extended in order to be useful for natural processes and their instantiation in multi-agent systems. This is because natural processes can be spatially extended, continuous, stochastic, or some combination of these, and such characteristics often fall outside the scope of discrete computation theory. Computational mechanics attempts to construct the minimal model capable of statistically reproducing the observed data. This is called an ε-machine or causal state machine. This approach leads one naturally to address the issue of the information processing capability of the system. There is a vast amount of literature on computational mechanics, including both theoretical developments and system applications. Jim Crutchfield maintains a rather exhaustive page on computational mechanics called the Computational Mechanics Archive, and almost all papers of interest are listed there.

ε-machine (causal state machine)
The ε-machine is a mathematical object that embodies the Occam's razor principle in that its description of the process has minimal statistical complexity subject to the constraint of maximally accurate prediction. The ε-machine provides
ε-machine reconstruction
The problem of inferring ε-machines from limited data is called ε-machine reconstruction. The focus is on developing inference algorithms (state-merging versus state-creating) and accompanying statistical error theories, indicating how much data and computing time are required to attain a given level of confidence in an ε-machine with a finite number of causal states.

Causality
Causality is defined in temporal terms: something causes something else by preceding it. A sequence of causal states has the property that causal states at different times are independent, conditioned on the intermediate causal states. This is a Markov chain: knowing the whole history of the process up to a point has no more predictive capacity than knowing the causal state it is in at that point. The causal structure is the equivalence relation that causal states establish among different histories which have the same conditional distribution of future observables, together with the symbol-emitting state transition probabilities. A directed graph, where states are represented as nodes and edges labeled with emitted symbols and transition probabilities represent state transitions, can be used to visualize the causal information in the ε-machine.

Type of raw data for inferring causality (suitable processes)
Processes best captured by this method are sequences of a symbolic character, with complex patterns and an unknown generative mechanism, such as an extremely long string of not-totally-random 1's and 0's. Processes are assumed to be stationary; however, online reconstruction can relax this constraint. The algorithmic complexity of the reconstruction increases exponentially with the size of the symbol set. The even process is reconstructed (using CSSR) from sequences of 10000 samples with correct causal states and transition probabilities within 3% of the correct probabilities.
The even process generates a string of 1's and 0's according to the rule that after a 0 is emitted, there is a coin flip to decide whether a 1 or a 0 follows. After an odd number of 1's has been emitted, the process must emit a 1, but after an even number of 1's it is back to the coin flip. Randomness is embodied in the minimally stochastic transitions from one state to another. A given state can lead to many states, but a given symbol leads to at most one of those states.

Limitations
Unlike for graphical causal models, a "factor" is not a meaningful term in the definition of an ε-machine. In computational mechanics, the causal state definition sidesteps this, because it does not matter what determines a causal state to be a causal state; it depends only on the future morph of a given history. Pearl's notion of "stability" (robustness of independence patterns) is not satisfied in ε-machine reconstruction. Stability means that a model's structure retains its independence pattern in the face of changing parameters. It is unclear in this context what the "parameters" actually are.
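The two-state structure of the even process is easy to simulate, which makes its defining property concrete: every completed block of consecutive 1's has even length. A stdlib-only sketch (state names A/B are just labels for the coin-flip state and the forced-1 state):

```python
import itertools
import random

def even_process(n, seed=0):
    """Generate n symbols from the even process: in state A flip a coin;
    emitting a 1 moves to state B, which must emit another 1 and return
    to A. Blocks of consecutive 1's therefore always have even length."""
    rng = random.Random(seed)
    out, state = [], "A"
    for _ in range(n):
        if state == "A":
            symbol = rng.choice([0, 1])
            state = "B" if symbol == 1 else "A"
        else:                          # state B: forced 1, back to the coin flip
            symbol, state = 1, "A"
        out.append(symbol)
    return out

seq = even_process(10000)
# every completed block of 1's (bounded by 0's) has even length;
# the final block may be truncated by the end of the sequence
blocks = [len(list(g)) for k, g in itertools.groupby(seq) if k == 1]
```

Sequences like this are exactly the kind of input from which CSSR can recover the two causal states and the transition probabilities.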

With special thanks to Thomas T. Jensen from sumthings, we use an image that is part of a project on emergent architecture. As the name suggests, "sumthings" refers to the theory of emergence and to the application of the bottom-up approach in architectural design. Check out their website and you will find an impressive collection of designs following the principle of holism as it was first summarized by Aristotle in the famous quote: "The whole is more than the sum of its parts". This particular project from Thomas T. Jensen and David O. Wolthers is called pravent. They designed a very basic component with the low-level property of being able to attach to other instances of the same component. This is translated into high-level properties of the resulting fractal structure, which are rotation, scaling and self-similarity. The result looks much better than the individual components. Indeed, this stunning design is the new emergent property that does not exist in the parts. This principle is so universal that we can find it everywhere and on all scales. It translates molecules into organisms, particles into matter, individuals into societies, etc. These are the systems we study in Complex Systems Lab in order to predict macroscopic properties or infer low-level designs. This becomes more and more important for the design of man-made systems, including not only physical designs like architecture but also software, control, and logistics. Architecture in this context goes far beyond the design of buildings.
The study of complex networks is a more recent activity in complex systems science. It is largely inspired by empirical studies of real-world networks like social, gene, neuronal, and computer networks. A complex network is essentially a graph with "non-trivial" topological features. Let's first review some trivial features of regular or random networks, as they are known from graph theory.

Trivial Network Features
Trivial networks such as lattices and random graphs have been studied extensively in the past. Lattices are regular graphs whose drawing corresponds to some grid/mesh/lattice. A random graph is a graph that is generated by a random process. Some of the more prominent graph properties will be discussed. Graph properties, or invariants, depend on the abstract structure only, not on the representation. They include degree, clustering coefficient, connectivity (scalars); degree sequence, characteristic polynomial, Tutte polynomial (sequences/polynomials).

Degree / Degree Distribution
The degree of a node is the number of connections it has to other nodes. There is a maximum degree, a minimum degree, and a degree sum of a graph. A vertex with degree 0 is called an isolated vertex; a vertex with degree 1 is called a leaf vertex, and the edge connected to that vertex is called a pendant edge. The degree distribution is the probability distribution of these degrees over the whole network. This is an important property of both real-world networks and theoretical networks. For example, the random graph has a binomial distribution of degrees, or a Poisson distribution for large networks. Most real-world networks are highly right-skewed, meaning that a large majority of nodes have low degree but a small number, known as "hubs", have high degree (see scale-free networks below). The degree sequence is a non-increasing sequence of vertex degrees. Isomorphic graphs have the same degree sequence, but graphs with the same degree sequence are not necessarily isomorphic.
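The degree distribution of a G(n, p) random graph can be sampled directly from the definition; a stdlib-only sketch (the adjacency-set representation is just one convenient choice):

```python
import random
from collections import Counter

def random_graph(n, p, seed=0):
    """G(n, p) random graph: each of the n*(n-1)/2 possible edges exists
    independently with probability p. Returned as an adjacency set per node."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

G = random_graph(1000, 0.01)
degrees = [len(nbrs) for nbrs in G.values()]
distribution = Counter(degrees)              # degree -> number of nodes
mean_degree = sum(degrees) / len(degrees)    # expected ~ p * (n - 1), here ~10
```

Plotting `distribution` shows the binomial (approximately Poisson) shape concentrated around the mean, with no hubs, unlike the right-skewed distributions of real-world networks.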
Isomorphism captures the idea that the structure of the graphs is the same, ignoring individual distinctions; e.g. the number of cycles in isomorphic graphs is the same, although node coloring and graph drawings may differ.

Clustering Coefficient
The clustering coefficient assesses the degree to which nodes tend to cluster together; e.g. in social networks, nodes tend to create tightly knit groups characterised by a relatively high density of ties, greater than the average probability of a tie randomly established between two nodes. There is a global and a local version of this measure. The global clustering coefficient gives an overall indication of the clustering in the network, whereas the local one gives an indication of the embeddedness of single nodes. The global clustering coefficient is based on triplets of nodes: three nodes that are connected by either two (open triplet) or three (closed triplet) undirected ties. It is calculated as the ratio of the number of closed triplets to the total number of triplets (both open and closed). The local clustering coefficient of a node quantifies how close its neighbors are to being a clique (complete graph). It is used to determine whether a graph is a small-world network (below). The local clustering coefficient of a node is given by the number of links between the vertices within its neighbourhood divided by the number of links that could possibly exist between them. The measure is 1 if every neighbour of the node is also connected to every other vertex within the neighbourhood, and 0 if no vertex that is connected to the node connects to any other vertex that is connected to the node.

Connectivity
Connectivity of a graph is a measure of its robustness as a network. Two vertices are connected if there is a path between them; otherwise they are disconnected. A graph is connected if every pair of vertices is connected.
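The local clustering coefficient follows directly from the definition above; a small sketch on a toy graph (a triangle with one pendant vertex, using the same adjacency-set representation as before):

```python
def local_clustering(adj, v):
    """Local clustering coefficient of v: the fraction of possible links
    among the neighbours of v that actually exist (1 = the neighbourhood
    is a clique, 0 = no two neighbours are linked)."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# triangle 0-1-2 plus pendant vertex 3 attached to 0
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
```

For node 0 only one of the three possible neighbour pairs (1, 2) is linked, giving 1/3, while nodes 1 and 2 sit in a complete neighbourhood and score 1.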
Vertex connectivity is the size of the smallest vertex cut of a connected graph, i.e. the number of vertices that must be removed in order to disconnect the graph. Local connectivity is the size of the smallest vertex cut separating two vertices in a graph. Graph connectivity equals the minimal local connectivity. Edge connectivity is the smallest number of edges whose removal renders the graph disconnected. Local edge connectivity is the smallest number of edge removals that disconnect two given vertices. Again, the smallest local edge connectivity in a graph is the graph edge connectivity. The vertex and edge connectivities of a disconnected graph are both 0. The complete graph on n vertices has edge connectivity equal to n − 1; every other simple graph on n vertices has strictly smaller edge connectivity. In a tree, the local edge connectivity between every pair of vertices is 1. Vertex connectivity is bounded above by edge connectivity, which in turn is bounded by the minimum degree of the graph.

Non-trivial Network Features
Most real-world networks show topological features like a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure, and hierarchical structure.

Assortativity
Assortativity refers to a preference of a node to attach to other nodes that are similar or different. Similarity is often, but not necessarily, expressed in terms of a node's degree; e.g. in social networks, highly connected nodes prefer to attach to other highly connected nodes (assortativity). The opposite effect can be observed in technological and biological networks, where highly connected nodes tend to attach to low-degree nodes (disassortativity). Assortativity is measured as a correlation between two nodes. The most common measures are the assortativity coefficient and neighbor connectivity. The assortativity coefficient is a Pearson correlation coefficient of degree between pairs of linked nodes.
The range of the measure r is from −1 for completely disassortative mixing to 1 for perfectly assortative mixing patterns.

Scale-free Networks
In scale-free networks the degree distribution follows a power law. The degree distribution of these networks has no characteristic scale: some vertices have a degree that is orders of magnitude larger than the average (hubs). Networks exhibiting such a distribution are very different from networks where edges exist independently and at random (Poisson process). Scale-free networks can be created with the Yule process (Gibrat principle, Matthew effect, cumulative advantage, preferential attachment).
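Preferential attachment is simple to simulate: each new node attaches to existing nodes with probability proportional to their current degree. A stdlib-only sketch in the Yule/Barabási-Albert style (the `repeated` node list with multiplicity equal to degree is a standard sampling trick; parameters are illustrative):

```python
import random

def preferential_attachment(n, m=2, seed=0):
    """Grow a graph of n nodes: each new node attaches m edges to existing
    nodes chosen with probability proportional to their current degree.
    Returns the degree of every node."""
    rng = random.Random(seed)
    targets = list(range(m))          # start from m initial nodes
    repeated = []                     # node list with multiplicity = degree
    degree = {v: 0 for v in range(m)}
    for new in range(m, n):
        degree[new] = 0
        for t in set(targets):        # connect the new node to its targets
            degree[new] += 1
            degree[t] += 1
            repeated.extend([new, t])
        # choosing uniformly from `repeated` samples proportionally to degree
        targets = [rng.choice(repeated) for _ in range(m)]
    return degree

deg = preferential_attachment(5000)
hub = max(deg.values())   # hubs: far above the mean degree of ~2m
```

The resulting degree distribution is heavy-tailed: most nodes keep the minimum degree while a few hubs accumulate degrees orders of magnitude above the mean, in contrast to the Poisson-like G(n, p) case.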
These networks show a high robustness to random vertex removal, i.e., the vast majority of vertices remain connected together in a giant component. However, they are clearly sensitive to targeted attacks on hubs. Also, these critical vertices are the ones with the highest degree, and have thus been implicated in the spread of disease (natural and artificial) in social and communication networks, and in the spread of fads. The detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution, the part of the distribution representing large but rare events, and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Finally, A. Clauset et al. [1] published their brilliant paper, in which they present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. The approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov (KS) statistic and likelihood ratios. The linked page hosts implementations of the methods described in the article, and this site contains the 24 data sets.
[1] A. Clauset, C. R. Shalizi, and M. E. J. Newman. Power-law distributions in empirical data. SIAM Review, 51(4):661-703, 2009.

Small-world Networks
A small-world network (in analogy to the six degrees of separation phenomenon) is a regular graph with the addition of only a small number of long-range links. Even if the diameter of a regular graph is proportional to the size of the network, it can be transformed into a "small world" in which the average number of edges between any two vertices is very small. In small-world networks the diameter should grow as the logarithm of the size of the network, while the clustering coefficient stays large. A wide variety of abstract graphs exhibit the small-world property, e.g., random graphs and scale-free networks. Real-world networks such as the World Wide Web and the metabolic network also exhibit this property.

Network Data Sets
The SNAP project at Stanford University maintains a collection of network data sets. Here is a subset of it.
Social Networks: online social networks, edges represent interactions between people
Communication Networks: email communication networks with edges representing communication
Citation Networks: nodes represent papers, edges represent citations
Collaboration Networks: nodes represent scientists, edges represent collaborations (co-authoring a paper)

This article outlines an approach in agent-based computational economics to building a macroeconomic model of the recent housing bubble. The goal of this effort is to gain a better understanding of its causes and to formulate policy prescriptions. The common view of economists with respect to the causes of the housing bubble is that the Federal Reserve's interest rate policies and measures that started in 1993 have led to the current crisis. This hypothesis needs to be verified (by observing some emergent properties). This is a model where we observe patterns of development emerging out of the individual interactions of agents qualitatively and try to match them with the empirical data. In contrast to the conventional approach, which produces quantitative house price projections based on trends in incomes, interest rates, and housing supply and demand, the agent-based approach simulates the interaction of individual agents who seek to buy and sell properties. House prices emerge from this market process.

Model Definition
According to McMahon et al. (2009), the main actors in the housing market (people, banks) and the components (houses, mortgages) are modelled as agents. People are either renters or owners of one or more houses. Mortgages are Adjustable Rate Mortgages (ARMs) based on the interest rate. Each house is associated with zero or one mortgage that is owned by a bank.

People
People have a fixed income that follows a uniform distribution within some range. They may relocate, which in turn requires renting or owning houses. This decision requires an evaluation of the financial situation with respect to renting or ownership. House owners may evaluate the decision to buy an extra house as an investment and rent it out to other people. House owners may also decide to sell a house.

Houses
Houses can be rented or owned. Initial house prices follow a uniform distribution within some range. Houses are foreclosed due to insufficient funds to pay mortgages or rents.

Mortgages
A mortgage is owned by a bank and is associated with a particular person and a house. Mortgage payments are adjusted to represent the notion of ARMs, possibly with some time lag.

Banks
Banks maintain a balance sheet to keep track of their assets (mortgage payments) and liabilities (mortgage value of the houses owned by the bank).

Model Initialization
Houses are created at a certain density (patchDensity) on the map. Each house is then assigned a price. A fraction of the houses are assigned as rentals (rentalDensity) and some percentage is occupied (occupiedPerc) by people. Each person is then assigned an income, and people are assigned to houses by matching income to rental/ownership costs. The parameters patchDensity, rentalDensity, and occupiedPerc can be calibrated to empirical data. The model generates the following output: average house price, average mortgage cost, number of owned vs. rented houses, banks' balance sheets, percentage of bankrupt people, and average location of houses.
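The initialization step above can be sketched as a small data model. This is a minimal, uncalibrated skeleton: the price and income ranges, the 90% loan-to-value, and the sort-based income-to-price matching rule are all illustrative assumptions, and map placement (patchDensity) is omitted.

```python
import random
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class House:
    price: float
    mortgage: float = 0.0          # outstanding loan held by the bank
    owner: Optional[int] = None    # index of the owning person, if any

@dataclass
class Person:
    income: float
    houses: List[House] = field(default_factory=list)

def init_model(n_houses=100, n_people=80, seed=0,
               price_range=(100_000, 400_000),
               income_range=(20_000, 90_000)):
    """Create houses and people with uniformly distributed prices and
    incomes, then match people to houses by rank: cheaper houses go to
    lower incomes. All numbers here are illustrative, not calibrated."""
    rng = random.Random(seed)
    houses = [House(price=rng.uniform(*price_range)) for _ in range(n_houses)]
    people = [Person(income=rng.uniform(*income_range)) for _ in range(n_people)]
    ranked_people = sorted(range(n_people), key=lambda i: people[i].income)
    ranked_houses = sorted(houses, key=lambda h: h.price)
    for pid, house in zip(ranked_people, ranked_houses):
        house.owner = pid
        house.mortgage = house.price * 0.9    # assumed 90% loan-to-value
        people[pid].houses.append(house)
    return houses, people
```

A simulation step would then update mortgage payments from the interest rate path and trigger foreclosures and sales, which is where the balance-sheet dynamics described below come from.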
First Results
With the Axtell & Epstein (1994) model, we can observe patterns of development emerging out of the individual interactions of agents qualitatively and try to test them against the empirical data. Using the actual interest rate scenario 1993-2009, a big drop in the banks' balance sheets can be observed, which corresponds to the decrease in house prices and the increase in average mortgage rates. A policy of controlling the interest rates exogenously leads to the emergence of bubbles and foreclosures.

Things to Do
As the first results have shown, exogenous control of the interest rates was one of the factors that led to the housing crisis. However, this is considered a necessary, but not sufficient, condition: future models should link the subprime crisis to the housing market. The model can be further enhanced by including the supply side of the market, represented by construction companies, in order to refine the endogenous emergence of house prices.

References:

The behavior of complex systems is a result of their internal structure. This structure reflects how processes compute information. This is determined by answering three questions:
- intrinsic unpredictability (deterministic chaos)
- emergence of structure (self-organization)

Main Idea
Readings

Social networks are networks in which vertices represent people or groups of people and edges represent social interactions among them, e.g. friendship. In sociological terms, vertices are actors and edges are ties. The study of social networks goes back to the psychiatrist Jacob Moreno, who in the early twentieth century became interested in the dynamics of interactions within groups of people. Moreno [1934] called his diagrams sociograms, which later became known as social networks. In his study of schoolchildren he used triangles and circles as vertices to represent boys and girls respectively, with an edge connecting two vertices indicating a friendship. The diagram reveals many friendships between two boys or two girls, but few between a boy and a girl. Once the diagram was drawn, this pattern was easy to see, and that is what persuaded social scientists that there was merit in Moreno's methods. Depending on the question one is interested in answering, there are many ways to define an edge in such a network. Edges may represent friendship, professional relationships, exchange of commodities, communication patterns, romantic or sexual relationships, or many other types of connections between individuals. The techniques to probe different types of interaction include direct questioning (e.g. interviews) [Rea97, Rap61], direct observation of experimental subjects, the use of archival records (e.g. the "southern women study" [Davis41]), ego-centered data analysis [Burt84, Bern89], affiliation analysis [Davis41, Gal85], small-world experiments [Mil67, Trav69], snowball sampling [Erick78, Frank79, Thom00], contact tracing, and random-walk sampling [Klov89, Thom00]. These techniques have been applied to problems such as friendship and acquaintance patterns at different scales of the population, e.g.
students, professionals, boards of directors, collaborations of scientists, movie actors, and musicians; sexual contact networks; dating patterns; covert and criminal networks such as those of drug users or terrorists; historical networks; online communities; and others. A classic problem of social network analysis is to discover clustering. In the remainder of this article we focus on the different empirical methods used to measure social networks.

Interviews
Asking people questions is the most common way to accumulate data about social networks. This can be done in the form of direct interviews, by questionnaires, or a combination of both, each with advantages and disadvantages with respect to the quality of data. A good introduction to social survey design and implementation is Rea and Parker [1997]. Surveys typically employ a name generator: a mechanism that invites respondents to name other nodes in the network, as well as their relationship to them, in order to explore the network. In the study of Rapoport and Horvath, the schoolchildren were asked to name their eight best friends within the school. There are some interesting points to notice about name generators. Nominating other vertices by ties is an asymmetric process: person A may nominate person B as a friend, but person B need not nominate person A in return. It therefore makes sense to represent these data as directed networks. Vertices in directed networks have two types of degree: the in-degree, the number of individuals who identified the vertex as a friend, and the out-degree, the number of friends identified by the vertex. The second point concerns the limit on the number of responses. In the study above, the limit was to name up to eight friends. Such fixed-choice studies limit the out-degree of the vertices. This cutoff may lead to the loss of information about the small-world effect in the network, which is caused by a small number of highly connected vertices.
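To make the in-degree/out-degree distinction concrete, here is a minimal sketch that builds a directed network from name-generator responses and computes both degrees. The respondents and nominations are hypothetical, and the cutoff here is three names rather than the eight used in the study above.

```python
from collections import defaultdict

# Hypothetical fixed-choice survey data: each respondent names up to three friends.
nominations = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A", "B", "D"],
    "D": [],
}

def degrees(nominations):
    """Return (in_degree, out_degree) for a directed nomination network."""
    in_deg = defaultdict(int)
    out_deg = {}
    for ego, alters in nominations.items():
        out_deg[ego] = len(alters)  # bounded above by the fixed-choice cutoff
        for alter in alters:
            in_deg[alter] += 1      # not bounded by the cutoff
    return dict(in_deg), out_deg

in_deg, out_deg = degrees(nominations)
# Ties are asymmetric: A names B, but B does not name A.
```

Note that C's out-degree is capped at three by construction, while its in-degree could in principle grow without limit as more respondents name C.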
However, the in-degree is not affected by such cutoffs. Studies based on direct questions are laborious, inaccurate, and costly; most of all, the data contain uncontrolled biases.

Ego-centered networks
Sociometric studies such as those in the previous section require a survey of all, or nearly all, of the population in order to determine the network structure; otherwise a reconstruction of the complete network of ties is not possible. Given the high cost of surveying large networks, a study of personal networks, or ego-centered networks, may be a feasible alternative. An ego-centered network is a network around one individual (the ego) and its immediate contacts (the alters). A typical survey would sample the population at random and ask respondents to identify all those with whom they have a certain type of contact. They are also asked for information about characteristics of themselves and their alters. This type of survey is useful in particular if we are interested in the degrees of the network: a random sample of degrees can give reasonable degree statistics. If we also gather information about contacts between alters, we can estimate clustering coefficients; if we have data on the characteristics of egos and alters, we can estimate assortative mixing.

Observation
Direct observation over a period of time is an obvious method for constructing social networks. It is rather labor-intensive and is restricted to small groups, primarily ones with face-to-face interactions in public settings. It is the only viable experimental technique for social network studies in animals.

Archival data
A highly reliable source of social network data is archival records.

Affiliation networks
Affiliation networks are a special kind of social network that focuses on discovering clusters.

Small-world experiment

Snowball sampling

Contact tracing

Random walks

References
The best general introduction to network theory is the book by Mark Newman [2010].
There is an active research community, much of it affiliated in some way with the Santa Fe Institute.

A one-dimensional (elementary) cellular automaton consists of a row of cells. Each cell can be in, say, one of two states: state '0' and state '1'. At each step, a rule updates the state of each cell, say based on that cell and its immediate left and right neighbors. The table below represents the rule for this kind of cellular automaton; it is a kind of lookup table. The top row contains all possible state combinations for a cell and its left and right neighbors at step n. The bottom row specifies the state of that cell at the next step in each of these cases.
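Such a lookup table can be encoded compactly as a single rule number, where bit k of the number gives the next state for the three-bit neighborhood whose binary value is k (Wolfram's convention; the text's generic table does not fix a particular rule). A minimal sketch, using periodic boundaries:

```python
def eca_step(cells, rule=110):
    """Apply one step of an elementary cellular automaton.

    Bit k of `rule` is the next state for the neighborhood whose
    three bits (left, center, right) encode the value k.
    Boundaries wrap around (periodic).
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single '1' cell and run a few steps of rule 110.
row = [0] * 11
row[5] = 1
for _ in range(3):
    row = eca_step(row)
```

Rule 110 is 01101110 in binary, so, for example, the neighborhood 010 (value 2) maps to bit 2 of 110, which is 1; the neighborhood 111 (value 7) maps to bit 7, which is 0, exactly as the lookup table would specify.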
