Telecom Paris
Dep. Informatique & Réseaux

J-L. Dessalles

November 2019



Module Athens TPT-09

    Emergence in Complex Systems: From nature to engineering



Please answer in English!
    This Lab work is based on the use of Evolife. If necessary, install it.

Emerging behaviour

Collective decision

In many cases in the animal kingdom, important decisions are taken collectively, without any individual having the last word. Similarly, a collective phenomenon has been invoked to explain the sudden growth of the demonstrations that led to the fall of the Berlin Wall in 1989. A similar situation obtains in the brain, where neural assemblies ‘decide’ to become synchronous. This study aims at showing that collective decisions may be sudden and clear-cut, even if all individual agents tend to hesitate.

In this story, the swallows’ tendency to perch (on a wire, say) increases smoothly with time. But swallows tend to imitate each other: their tendency to join a group of perched birds is proportional to the group’s size. The simulation shows that at some point, a collective decision takes place. This offers an insight into the kind of collective processes involved when birds decide to migrate away.
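The dynamics described above can be sketched in a few lines of Python. All parameter values below are made up for illustration; the real parameters (NbAgents, TimeSlope, GroupSlope, GroupOffset, Inertia) live in the Configuration Editor, and this sketch is not Evolife's actual implementation.

```python
import random

# Hypothetical values; only the names mirror the real parameters.
NB_AGENTS = 100
TIME_SLOPE = 0.0002   # individual perching tendency grows with time
GROUP_SLOPE = 0.01    # attraction proportional to the perched group's size
GROUP_OFFSET = 2      # groups smaller than this exert no attraction
INERTIA = 0.9         # probability that a perched bird stays perched

def simulate(steps=2000, seed=0):
    rnd = random.Random(seed)
    perched = 0
    history = []
    for t in range(steps):
        flying = NB_AGENTS - perched
        # probability for a flying bird to land at this time step:
        # a slow time-driven term plus an imitation term
        p_land = min(1.0, TIME_SLOPE * t
                          + GROUP_SLOPE * max(0, perched - GROUP_OFFSET))
        landed = sum(1 for _ in range(flying) if rnd.random() < p_land)
        left = sum(1 for _ in range(perched) if rnd.random() > INERTIA)
        perched += landed - left
        history.append(perched)
    return history

h = simulate()   # the perched count stays low for a while, then jumps abruptly
```

Because the imitation term feeds back on itself, the perched population switches from near zero to near saturation over a few time steps, which is the clear-cut collective decision the simulation displays.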

The program Swallows.py (in the directory Other/Swallows) implements collective decision. You can run it by executing the local command Starter in that directory. The Configuration Editor allows you to change parameter values.

The blue curve shows the proportion of the population perched. Observe that the number of swallows on the wire increases dramatically at some point in time. The yellow curve shows the increasing perching probability (x 1000). Run the simulation several times with the same parameters. Observe that the decision point varies.

Swallows
Look at the definition of the various parameters (NbAgents, TimeSlope, GroupSlope, GroupOffset, Inertia). How does collective decision time vary depending on each of these parameters?

Sooner with larger population / Later with larger population
Sooner with steeper time slope / Later with steeper time slope
Sooner with steeper group slope / Later with steeper group slope
Sooner with larger group offset / Later with larger group offset
Sooner with larger inertia / Later with larger inertia


    

Suggestions for further work

Ants

The program Ants.py (in the directory Other/Ants) is a preliminary sketch to illustrate a simple model of collective foraging behaviour. Though individual agents follow erratic paths to find food, the collective may discover optimal paths.

In this story, ‘ants’ move in search of food. They move randomly, avoiding cells that have already been visited and marked with a repulsive pheromone. When they find food, they head back toward the colony, laying down attractive pheromone in decreasing amounts along the way.
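A minimal, self-contained sketch of this two-pheromone logic, using a 1-D world and made-up deposit values (the names NP_DEPOSIT and PP_MAX only echo parameters of Ants.py; this is not its actual implementation):

```python
import random

NP_DEPOSIT = 1.0   # repulsive pheromone dropped on each visited cell
PP_MAX = 10.0      # attractive pheromone laid when food is found

def forage(world_size=20, food_at=15, seed=1):
    rnd = random.Random(seed)
    negative = [0.0] * world_size   # repulsive marks
    positive = [0.0] * world_size   # attractive trail
    pos, path = 0, [0]
    # exploration: random steps, biased away from already-marked cells
    while pos != food_at:
        moves = [p for p in (pos - 1, pos + 1) if 0 <= p < world_size]
        moves.sort(key=lambda p: negative[p] + rnd.random())  # prefer unmarked cells
        pos = moves[0]
        negative[pos] += NP_DEPOSIT
        path.append(pos)
    # homing: walk back to the colony, laying a decreasing attractive trail
    deposit = PP_MAX
    while pos != 0:
        positive[pos] += deposit
        deposit *= 0.9              # the trail fades with distance from the food
        pos -= 1
    return path, positive

path, trail = forage()
```

Even this toy version shows the division of labour between the two pheromones: the repulsive one keeps exploration moving into fresh territory, while the attractive trail, strongest near the food, lets later ants climb the gradient back to the source.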

Run the program by executing the local command Starter. You should see that after a while, ants reach food sources and exploit them until they are exhausted.


NegativePheromone
This implementation makes use of two kinds of pheromone, a negative one and a positive one. Set the quantity of negative pheromone to zero. Do you notice any change? To make comparisons, you may set Simulation/RandomSeed to any non-zero value to make the program deterministic.


    


    Restore the negative pheromone.


Exploitation
Locate food sources deterministically at various distances from the nest (e.g. along a line, not aligned with the nest). You may also vary the number of food sources. Copy the corresponding portion of code in the box below. Does the ant colony exhaust nearer food sources before farther ones? Describe your observations in a few precise words.


    


PheromoneControl
Some parameters are involved in pheromone control (NPDeposit, PPMax, Evaporation, SniffingDistance, Saturation, Exploration). Choose one of them, study its effect and make a precise description of your results.


    

Suggestions for further work

Ants.py is kept basic. You may improve on the following points:

AntNet

If node A sends a message to node B, the message has to go through a set of intermediate nodes because A and B are not directly connected. One possible shortest path for the message is the one indicated by thick lines and arrows, which takes the message from A to B in 5 steps. If, however, node N breaks down or is highly congested, the message needs to be rerouted dynamically toward a slightly longer route that goes through nodes N’ and N’’. Although it now takes 6 hops for the message to be transmitted from A to B, the actual transmission time will be reduced and the message will be less likely to be lost.

In classical networks:


The principle of AntNet is to use "ants" that travel through the network. When such an "ant" reaches a node, its age tells something about the path it went through to reach the node. The AntNet procedure goes like this.
The second column in the table shown to the right gives probabilities of direction for an ant travelling to node 2. In the last column, we see probabilities that are updated by an ant coming from node 5.

The routing table r gives, at node i and for destination d, the probability of choosing n as the next node.
The table is updated by each incoming ant: an ant originating from d and arriving from n0 increases by δr the probability for a message to choose n0 as its next node (and all probabilities are then divided by 1 + δr so that they still sum up to 1).


δr decreases with the ant’s age.
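The reinforce-and-renormalize step can be sketched as follows. Only the update of the row follows the description above; the exact decay of δr with age is a made-up formula, and the names are illustrative, not those of AntNet.py.

```python
# r_row holds, for one (current node, destination) pair, the probability
# of each candidate next hop. delta_r is larger for younger (faster) ants.

def update_row(r_row, n0, age, delta_max=0.5):
    """Reinforce next hop n0 in proportion to the ant's youth."""
    delta_r = delta_max / (1 + age)   # hypothetical decay with age
    r_row[n0] += delta_r              # reward the hop the ant arrived from
    total = 1 + delta_r               # the row summed to 1 before the update
    return {n: p / total for n, p in r_row.items()}

row = {'n1': 0.5, 'n2': 0.3, 'n3': 0.2}
row = update_row(dict(row), 'n2', age=3)
```

Note that dividing every entry by 1 + δr keeps the row a probability distribution, so reinforcing one hop automatically weakens all the others.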

The program AntNet.py (in the directory Other/AntNet) is a preliminary implementation of the AntNet algorithm. ’Ants’ move probabilistically through the network and get older each time they reach a node. At each node, they update the local routing table.

Run the program by executing the local command Starter. You may run the program on a simple network (described in the file Network1.ntw) by loading the configuration file AntnetTest_Typical.evo. Or you can generate a random network by loading the configuration file Antnet_Typical.evo. You should see that after a short while, a test message (in blue) finds its way through the network between the two nodes indicated by larger dots. You may change the speed of moving ants (to accelerate the display) using the slider.

AntNetNumberOfAnts
Diminish the number of ants in either experiment (test or random). How many ants are necessary for the network to find the shortest path? Provide an example.


    

Suggestions for further work

AntNet.py is kept basic. You may improve on the following points:

Bibliography

Small worlds

    [by Félix Richart]

Many real-life networks are randomly generated: the Facebook friendship network, the Web, the Twitter network of followers, the sexual-relation graph (who had sex with whom), the citation graph (who cites whom in scientific papers). These "natural" networks (natural in the sense that they are generated by many individuals without global coordination) look similar. They are all "scale-free" networks.

A network is said to be "scale-free" if its degree distribution follows a power law: a huge number of nodes have only a few neighbours, while a small number of nodes have a lot of neighbours.
Take the example of a graph where nodes are Facebook users and edges are friendship links between them: this network is scale-free. A few people have a huge number of friends, while most of us have only from 50 to 500 friends.
This property of natural networks can be explained by the notion of "preferential attachment": when a node chooses its neighbours, it prefers nodes with higher degree. For example, if you are in a group of people and you don’t know anyone, you will naturally try to befriend the most popular person in the group.

To simulate preferential attachment, go to the Evolife/Other/PreferentialAttachment folder and run the program through the local command Starter.
This program simulates a population of individuals who have to choose other individuals to link to.

Choose strategy ‘random’ in the ‘Network’ section. Each node sends a (bidirectional) link towards several other randomly chosen nodes. Run the program with 400 nodes and 4 links per node.

RandomGraphRandomAttachment
Observe the degree distribution
(note: the scale is logarithmic (base 2); the distribution starts from LinksPerNode+1, not from 0).

Explain why the maximum of the degree distribution corresponds to 2 * LinksPerNode.



    

We now consider a crude implementation of preferential attachment. The algorithm behind it is quite simple: when a node wants to find a neighbour, it randomly picks a set of nodes from the graph. It then studies this set to see which node receives the most links from other nodes of the set. This amounts to choosing as your next friend the most popular individual in your social circle.
This method leads to a global scale-free network in which a small subset of people are "friends" with a very large number of others (this supposes that "friends" do not need to spend time together, as on Twitter and unlike in real life).
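The sample-based selection described above can be sketched as follows (the function name, the edge representation, and the toy graph are illustrative, not those of the Evolife code):

```python
import random

# Draw a random sample of nodes and link to the one that receives the most
# links from other members of the sample. 'links' is a collection of
# (node, node) couples.

def most_popular_in_sample(nodes, links, sample_size, rnd=random):
    sample = rnd.sample(nodes, sample_size)
    in_sample = set(sample)
    def degree_within(n):
        # count edges that tie n to other sampled nodes
        return sum(1 for a, b in links
                   if n in (a, b) and a in in_sample and b in in_sample)
    return max(sample, key=degree_within)

nodes = list(range(6))
links = {(0, 1), (0, 2), (0, 3), (4, 5)}
random.seed(0)
hub = most_popular_in_sample(nodes, links, sample_size=6)
```

Since already-popular nodes win more samples, they collect new links even faster; this rich-get-richer loop is what produces the power-law tail.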

Choose the ‘PA’ strategy (preferential attachment) and set SampleSize to a large value, typically one quarter of the total number of nodes. Run the program again. You should observe that some nodes get more popular and have more neighbours.

RandomGraphPA
What can you say about the curve formed by the blue dots (degree distribution)?
Why is it so?


    

RandomGraphPA2
How does it compare with the ‘random attachment’ case?


    

The preceding method is time-consuming. Each agent has to spend time studying a representative sample of the population. If you set SampleSize to a lower value, such as one tenth of the network’s size, you may observe that the effect disappears.

Preferential attachment may result from much lighter mechanisms.
Suppose that our nodes represent scientists. Two scientists in the graph are connected if they wrote a paper together. Imagine you are a scientist, and you want to find someone to write a paper with. You could use the above method and select the most popular author among all your fellow scientists, i.e. the one who wrote the most papers with your other friends. But you are a lazy scientist! So you adopt a much easier strategy. You just pick up a random paper in your domain and decide to work with one of the scientists who wrote it. In other words, you just select an edge of the graph and choose one of the nodes connected by this edge.
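Independently of Evolife's Graph class, the edge-picking idea can be sketched as follows (names are illustrative). Because a node with many edges appears in many couples, sampling an edge and then an endpoint chooses nodes in proportion to their degree, without ever computing a degree explicitly.

```python
import random

def pick_by_edge(links, rnd=random):
    """Choose a random edge, then one of its two endpoints."""
    a, b = rnd.choice(list(links))   # a random (node, node) couple
    return rnd.choice((a, b))        # one endpoint of that edge

# Node 0 sits on every edge of this toy graph, so it is picked
# far more often than the degree-1 nodes.
links = [(0, 1), (0, 2), (0, 3)]
random.seed(0)
counts = {n: 0 for n in range(4)}
for _ in range(4000):
    counts[pick_by_edge(links)] += 1
```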

RandomGraphPick
In the findNeighbour function of the Graph class, implement this method for finding a neighbour
(the edge list self.Links is a list of couples representing connected nodes). Test the method by choosing the ‘picking’ strategy. Copy your method below.


    

RandomGraphPick2
What do you observe concerning the blue dot curve?
Do you have any idea why it is so?


    

Cocktail party


At a cocktail party, everyone must raise their voice to cover the noise of neighbouring conversations. Inevitably, the global noise level increases and conversational circles shrink.
The basic situation is implemented in the program cocktail.py (directory Other/Cocktail), launched by the local command ‘Starter’.
Agents are randomly located in a 2-D space and attempt to engage in conversation with their neighbours, trying to be heard by them while sparing energy.

At the beginning, agents just talk (or sing) by themselves. You can see that agents progressively raise their voice level. The situation may stabilize at an intermediate level or evolve into a state in which everyone shouts at the maximum level.
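The escalation loop can be sketched as follows (parameter names, values, and the all-to-all neighbourhood are illustrative, not those of cocktail.py):

```python
VOICE_MIN, VOICE_MAX = 5, 100
ATTENUATION = 0.3   # fraction of a neighbour's voice that reaches an agent

def step(voices):
    """Each agent speaks just loudly enough to be heard over the ambient noise."""
    new = []
    for i, v in enumerate(voices):
        noise = ATTENUATION * sum(w for j, w in enumerate(voices) if j != i)
        new.append(min(VOICE_MAX, max(VOICE_MIN, noise)))
    return new

crowd = [VOICE_MIN] * 6
for _ in range(50):
    crowd = step(crowd)   # dense room: everyone ends up shouting

trio = [VOICE_MIN] * 3
for _ in range(50):
    trio = step(trio)     # sparse room: noise stays at the minimum level
```

The toy model makes the sensitivity to density visible: with six agents each step multiplies the voice level by 0.3 × 5 = 1.5, so the noise surges to the ceiling, while with three agents the factor is 0.6 and the minimum level persists.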

Note that the system is quite sensitive to parameter values:


There is a possibility of displaying the surrounding noise (parameter NoiseDisplay), but it slows down the simulation.

Consider the Influence parameter. Below a minimal value (e.g. 20), neighbouring cells cannot even hear the minimal voice level (e.g. 5).
Observe that even that minimal value will most probably lead to a noise surge eventually.

Try to find a combination of density and influence below which noise remains localized.

Open Cocktail.py and locate the attenuation function. As you can see, voice attenuation depends linearly on the inverse of distance.

Cocktail
Change this to a steeper decrease, e.g. the inverse of the square of the distance (copy the Python line in the box below). Can you easily create a situation in which noise does emerge but remains localized? For which parameter values? Can you see a difference with the inverse-linear case?


    

Suggestions for further work


            
