Lab Sessions

HAPPY CODING !!!

DS2 - Spring 2017

Typically Friday 11:30-13:30 (A212) or see the schedule here.

***NEW***: Some research-based projects (as master thesis projects) will be available soon for students who are interested in the field of distributed systems and secure distributed protocols (see my research page for more information, or feel free to drop me an email for an appointment via channam{dot}ngo{at}unitn{dot}it).

Some possible general topics include, but are not limited to:

  • Probabilistic distributed consensus protocol, e.g. the Bitcoin protocol
  • Secure multi-party computation on distributed ledger, e.g. FinTech on blockchain

FEEL FREE TO SPREAD THE WORD!!!

Thanks!

10. May 26th - Bitcoin (last lab) - Probabilistic Consensus Protocol

Different from other consensus protocols, e.g. PBFT, Bitcoin relies on Proof-of-Work and the so-called Nakamoto consensus, a probabilistic consensus protocol, to converge. It is called probabilistic because convergence is not permanent; it depends instead on the decisions of the majority of the "mining" power.

In this lab session we will focus on experiencing the effect of probabilistic consensus: your program should be able to show the forks and the fact that each node switches from branch to branch. Use the template and:

  • Instantiate as many BitcoinActors as you can (within your machine's memory limit).
  • To set up the network topology, reuse the Gnutella code provided in Lab 7 to build a network that is not fully connected.
  • Decide the number of new blocks per "tick" and use a Poisson distribution to determine the nodes that find the PoW (a sampling sketch is given after this list).
  • See the result of the experiment for yourself.
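
As a starting point, here is a minimal sketch of how the per-tick block generation could be simulated. The class and field names (TickSimulator, uniform mining power) are illustrative assumptions, not part of the template.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch: sample how many blocks are found in one tick from a
// Poisson distribution, then pick the "lucky" miners uniformly at random.
public class TickSimulator {
    private final Random rng = new Random();

    // Knuth's method for sampling a Poisson-distributed integer with mean lambda.
    int samplePoisson(double lambda) {
        double limit = Math.exp(-lambda), p = 1.0;
        int k = 0;
        do {
            k++;
            p *= rng.nextDouble();
        } while (p > limit);
        return k - 1;
    }

    // Returns the indices of the nodes that find a proof-of-work in this tick.
    List<Integer> minersForTick(int numNodes, double expectedBlocksPerTick) {
        int blocks = samplePoisson(expectedBlocksPerTick);
        List<Integer> winners = new ArrayList<>();
        for (int i = 0; i < blocks; i++) {
            winners.add(rng.nextInt(numNodes)); // uniform mining power; weight this to model stronger miners
        }
        return winners;
    }
}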

9. May 19th - Byzantine Fault Tolerance

The Byzantine generals problem dates back to the '80s: the nodes in a network may suffer from so-called Byzantine failures, i.e. failures that cause the nodes to act arbitrarily and deviate from the correct behavior.

To solve this problem for a fully-connected network designed to tolerate up to m Byzantine nodes, the first (and also extremely expensive) algorithm, Oral Message, requires synchronous reliable broadcast over m + 1 rounds to converge to the correct output. The algorithm is summarized in lecture slides 6-8.

The first exercise for the students is to implement this expensive - yet simple to implement - OM(m) algorithm. Remember that for a network of n nodes, n > 3m must hold for the problem to be solvable with OM(m).

The second exercise is a little tweak to the algorithm under the Unforgeable Signature assumption, in which the 'order' from the 'General' cannot be forged, i.e. during the relaying of the message, the Byzantine nodes cannot modify it without being detected. The tweak works as follows (a sketch of a signed order is given after the list):

  • The General sends a signed order to the lieutenants.
  • The Lieutenants record the message, then sign and relay it until the message contains m signatures.
  • At round m+2, everybody can decide the output.
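
To make the relaying rule concrete, here is a minimal, hedged sketch of a signed order. The class name SignedOrder and the use of a plain list of signer names in place of real cryptographic signatures are illustrative assumptions, not part of the lab template.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: a "signed" order carrying its chain of signers.
// Real unforgeable signatures are emulated by an append-only list of names.
public class SignedOrder {
    private final String order;            // e.g. "attack" or "retreat"
    private final List<String> signatures; // the General first, then the relaying Lieutenants

    public SignedOrder(String order, String general) {
        this.order = order;
        this.signatures = new ArrayList<>();
        this.signatures.add(general);
    }

    private SignedOrder(String order, List<String> signatures) {
        this.order = order;
        this.signatures = signatures;
    }

    // A Lieutenant countersigns before relaying the order further.
    public SignedOrder signedBy(String lieutenant) {
        List<String> copy = new ArrayList<>(signatures);
        copy.add(lieutenant);
        return new SignedOrder(order, copy);
    }

    // Relaying rule from the list above: keep relaying until the message contains m signatures.
    public boolean shouldRelay(int m) {
        return signatures.size() < m;
    }

    public String getOrder() { return order; }
    public List<String> getSignatures() { return signatures; }
}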

This assumption removes the requirement n > 3m. Why? Find the answer yourself with your logical reasoning :-)

In case you want to move on to a more complicated setting, try replacing the fully connected network with a 3m-regular network and see for yourself.

8. May 5th - Epidemic: Beyond Dissemination

Beyond the dissemination of information, epidemic protocols should also care about scalability and the ability to deal with churn in today's dynamic and large networks of nodes.

The two techniques the students will practice in this lab session are Peer Sampling and Aggregation.

  • Peer Sampling and Aggregation both follow very generic algorithms, given respectively in slides 5 and 45 of the lecture.
  • For the timeout mechanism, follow the example of the previous labs.
  • The implementation of Peer Sampling should be based on the Newscast protocol (slides 6-7).
  • Feel free to choose whichever aggregation function described in slide 51 (an averaging sketch is given after this list).
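
As one example of an aggregation function, here is a minimal, self-contained sketch of pairwise averaging; the class and method names are hypothetical and independent of the lab stubs.

// Hypothetical sketch of the averaging aggregation step: when two peers
// gossip, both replace their local estimate with the pairwise average, so
// all estimates converge towards the global mean.
public class AverageAggregation {
    private double estimate;

    public AverageAggregation(double initialValue) {
        this.estimate = initialValue;
    }

    // One push-pull exchange between this peer and a randomly chosen one.
    public void exchangeWith(AverageAggregation peer) {
        double mean = (this.estimate + peer.estimate) / 2.0;
        this.estimate = mean;
        peer.estimate = mean;
    }

    public double getEstimate() { return estimate; }

    public static void main(String[] args) {
        AverageAggregation a = new AverageAggregation(10.0);
        AverageAggregation b = new AverageAggregation(2.0);
        a.exchangeWith(b);
        System.out.println(a.getEstimate() + " " + b.getEstimate()); // both print 6.0
    }
}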

Happy Coding!!!

7. April 21st - Peer 2 Peer (part 2) Peer 2 Peer Search with Gnutella (Solution)

The students will go ahead and implement the PING and PONG messages of the Gnutella protocol as in the lecture slides (62-76).
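
For orientation, a minimal sketch of the two message types is given below; the class and field names (Ping, Pong, ttl, hops) are illustrative assumptions, not the names used in the lab template.

import java.io.Serializable;
import java.util.UUID;

// Illustrative message classes for Gnutella-style search.
public class GnutellaMessages {

    // A PING is flooded to neighbours until its TTL runs out; duplicates are
    // detected via the message id.
    public static class Ping implements Serializable {
        public final UUID id;   // used to drop duplicates
        public final int ttl;   // decremented at each hop
        public final int hops;  // incremented at each hop

        public Ping(UUID id, int ttl, int hops) {
            this.id = id; this.ttl = ttl; this.hops = hops;
        }

        // The copy a node forwards to its neighbours (if the TTL allows it).
        public Ping forwarded() { return new Ping(id, ttl - 1, hops + 1); }

        public boolean shouldForward() { return ttl > 1; }
    }

    // A PONG answers a PING and travels back along the reverse path.
    public static class Pong implements Serializable {
        public final UUID pingId;    // id of the PING being answered
        public final String address; // contact info of the responding node

        public Pong(UUID pingId, String address) {
            this.pingId = pingId; this.address = address;
        }
    }
}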

Happy Coding!!!

April 14th - Happy Easter!!!

6. April 7th - Peer 2 Peer (part 1) Distributed Hash Table with CHORD (Solution)

The students will go ahead and implement the CHORD DHT as in the lecture slides (22-29).

To save you some trouble with the circular range, the helper class CircularKeyRange is provided.
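
For reference, here is a minimal sketch of the basic lookup step without finger tables; the names (ChordSketch, inHalfOpenRange, findSuccessor) are hypothetical and do not correspond to the provided CircularKeyRange API.

// Hypothetical sketch of the basic Chord successor lookup without finger
// tables: walk around the ring until the key falls between a node and its
// successor. Identifiers live on a circular space, so ranges may wrap around 0.
public class ChordSketch {

    // True if key lies in the circular interval (from, to], wrapping around 0.
    static boolean inHalfOpenRange(int key, int from, int to) {
        if (from < to) return key > from && key <= to;
        return key > from || key <= to; // the interval wraps past 0
    }

    // Naive O(n) lookup over a sorted ring of node ids, for illustration only.
    static int findSuccessor(int[] ringNodeIds, int key) {
        for (int i = 0; i < ringNodeIds.length; i++) {
            int from = ringNodeIds[i];
            int to = ringNodeIds[(i + 1) % ringNodeIds.length];
            if (inHalfOpenRange(key, from, to)) return to;
        }
        return ringNodeIds[0];
    }

    public static void main(String[] args) {
        int[] ring = {1, 8, 14, 21, 32, 38, 42, 48, 51, 56}; // example node ids on a small ring
        System.out.println(findSuccessor(ring, 54));          // key 54 is stored at node 56
    }
}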

Happy Coding!!!

5. Mar 31st - Consensus Protocols (part 2) Consensus with Failure Detector (cont) (Solution package)

4. Mar 29th - Consensus Protocols (part 1) Consensus with Failure Detector (Exercise package)

DIY - Do It Yourself time

To prepare the students for the upcoming projects, in this lab the students have to implement the consensus protocol starting from a very basic stub, with the support of the previous lab stubs and the experience gained through the past lab sessions.

A global clock is provided through the singleton class GlobalClock.

The simulation of a Failure Detector with Strong Completeness and (Eventually) (Weak) Accuracy can then be done as follows:

  • Initially there is a list of ActorRef (processes), i.e. {P1, P2, P3, ..., Pn}.
  • Each ActorRef Pi is centrally assigned (in FDCentralInfo) a "crashed" tick Ci, which means the process crashes at that tick; this information is available initially to every process as a map {P1 = C1, P2 = C2, ..., Pn = Cn}.
  • Each other ActorRef Pj will "realize" that process Pi has crashed only at some later tick Cij >= Ci; this information (in FailureDetector) differs from process to process: {P1 = {C12, ..., C1n}, P2 = {C21, ..., C2n}, ..., Pn = {Cn1, ..., Cn(n-1)}}.
  • Thus, at a certain tick T, the suspect list of a process can be constructed as the set of processes whose detection ticks Cij in its FailureDetector have already passed, i.e. Cij <= T (see the sketch after this list).
  • Be careful to control the number of crashed processes (remember f < N/2 is necessary for consensus)
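
A minimal sketch of how such a suspect list could be derived from the detection ticks is shown below; the class and method names are hypothetical and not those of the provided FDCentralInfo/FailureDetector classes.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: given the detection ticks this process holds for its
// peers, the suspect list at tick T contains every peer whose detection tick
// has already passed.
public class SuspectListSketch {

    // detectionTicks maps each peer's index to the tick at which this process
    // starts suspecting that peer (the Cij values above).
    static List<Integer> suspectsAt(int currentTick, Map<Integer, Integer> detectionTicks) {
        List<Integer> suspects = new ArrayList<>();
        for (Map.Entry<Integer, Integer> entry : detectionTicks.entrySet()) {
            if (currentTick >= entry.getValue()) {
                suspects.add(entry.getKey());
            }
        }
        return suspects;
    }
}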

Adjusting Ci and Cij yields different experimental results; play with them to see how consensus is achieved in different settings. Remember one thing: a "crashed" process should not handle a message upon receiving it.

Questions!!!

  • What is the configuration for Perfect FD?
  • What is the configuration for Strong FD?
  • What is the configuration for Eventually Strong FD?

Warming up!!!

Implement the Reliable Broadcast with Failure Detector following the pseudocode provided in slide 18 of the lecture. Hints:

  • Strong completeness is provided with the above configuration
  • Strong accuracy can be supported with Cij = Ci

Consensus with Eventually Strong FD

The algorithm for consensus is provided in slides 25 and 26 of the lecture.

First tweak the configuration for eventually weak accuracy.

Since Akka is an actor-message based framework, the students can break the algorithm down in an actor-message based style. Some hints are given below, followed by a dispatch skeleton (the students can of course try a different way):

  • Internal variables: r, est, decided, stop, aux, rec, proc
  • Methods:
    • onReceive
      • ProposeMessage(int v) from ApplicationMain: initialize all the internal variables and invoke onPropose()
      • Phase1Message(int r, int est, int p) from another process, or the coordinator becomes suspected: this is the signal from wait #1, so invoke onPropose1()
      • Phase2Message(int r, int aux, int p) from another process: this is the signal from wait #2, so invoke onPropose2()
    • onPropose() sends the phase 1 message
    • onPropose1() sends the phase 2 message
    • onPropose2() collects the replies from the other processes and moves to onPropose2Majority() when a majority is reached
    • onPropose2Majority() should execute only once and move back to onPropose() if stop = false
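
A minimal skeleton along these lines is sketched below, assuming Akka's UntypedActor API; it only shows the message dispatch, while the consensus logic of slides 25-26 still has to be filled in.

import akka.actor.UntypedActor;

// Skeleton sketch only: dispatch the messages named above to phase handlers.
public class ConsensusActorSketch extends UntypedActor {

    // Message classes as named in the hints (fields trimmed for brevity).
    public static class ProposeMessage { public final int v; public ProposeMessage(int v) { this.v = v; } }
    public static class Phase1Message { public final int r, est, p; public Phase1Message(int r, int est, int p) { this.r = r; this.est = est; this.p = p; } }
    public static class Phase2Message { public final int r, aux, p; public Phase2Message(int r, int aux, int p) { this.r = r; this.aux = aux; this.p = p; } }

    private int r, est;               // internal variables from the hints (others omitted)
    private boolean decided, stop;

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof ProposeMessage) {
            est = ((ProposeMessage) message).v;   // initialize the internal variables
            r = 0; decided = false; stop = false;
            onPropose();
        } else if (message instanceof Phase1Message) {
            // collect phase 1 estimates; call onPropose1() once wait #1 is satisfied
            // (or when the coordinator of round r becomes suspected)
        } else if (message instanceof Phase2Message) {
            // collect phase 2 replies; call onPropose2Majority() once a majority is reached
        } else {
            unhandled(message);
        }
    }

    private void onPropose()          { /* send the phase 1 message for round r */ }
    private void onPropose1()         { /* send the phase 2 message */ }
    private void onPropose2Majority() { /* decide once, then restart with onPropose() if stop == false */ }
}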

3. Mar 17th - Epidemic Protocols (part 2) Rumor Mongering (Solution)

A small remark: from now on a basic Akka Eclipse project is available here in case anybody cannot use the Activator.

For this exercise you will implement the Blind/Coin and Feedback/Counter versions of the Rumor Mongering protocol. This should be similar to, and as straightforward as, the previous lab (Part 1). Refer to slides 31-36 of the lecture.

The necessary files are:

A slight difference is that this lab requires the implementation of an extra method onEpidemicUpdateImpl(AssignMessage message), corresponding to the update function of Rumor Mongering (a sketch of the Blind/Coin stopping rule is given after the list of helpers below).

Again, some useful base-class variables and methods are:

  • me to refer to the current actor
  • getValue() and setValue(EpidemicValue v) to get and set the current stored value
  • setEpidemicTimeOut()
  • valueSynced() to print to console the round that the stored value is updated
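
As an illustration, the Blind/Coin stopping rule could look like the standalone sketch below; the class name and the parameter k are assumptions for illustration, so adapt the idea to the actual stub and its helpers.

import java.util.Random;

// Illustrative sketch of the Blind/Coin rule only: after every push of the
// rumor, stop spreading it with probability 1/k, regardless of any feedback.
public class BlindCoinRule {
    private final Random rng = new Random();
    private final double stopProbability; // 1/k
    private boolean active = true;        // are we still spreading the rumor?

    public BlindCoinRule(int k) {
        this.stopProbability = 1.0 / k;
    }

    // Call this after each time the rumor has been pushed to a random peer.
    public void afterPush() {
        if (rng.nextDouble() < stopProbability) {
            active = false; // lose interest "blindly", independent of the peer's answer
        }
    }

    public boolean isActive() { return active; }
}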

After the correct implementation of the algorithms, you can play around with the parameters, etc. to see the effect of the changes.

2. Mar 10th - Epidemic Protocols (part 1)

The previous exercise's stub is clearly a good example of how complication in the skeleton can kill a lab session. In this lab we do it another way: we leave the complication alone and move on.

For this exercise, you will implement the Push, Pull and finally PushPull versions of the Anti-Entropy protocol (see lecture slides 23-25).

But first things first: remember to modify application.conf, which you can find/put in the \src\main\resources folder, to use a separate thread per Actor.

The ApplicationMain, as usual, is the place where we can put experiment configurations, which in this case is only one parameter, epiType:

private static enum EpiType {PUSH, PULL, PUSHPULL};
private static EpiType epiType = EpiType.PUSH;

WARNING: CONSIDER YOUR MACHINE'S CAPABILITY BEFORE INCREASING THE NUMBER OF ACTORS.

The EpidemicActor is a base class that hides all the complicated stuff; please ignore it and only look at the three other classes that extend this base class: EpidemicPushActor, EpidemicPullActor, and EpidemicPushPullActor.

In these classes, you need to provide implementations of the following two methods (corresponding to the upon timeout and upon receive methods in the lecture slides for each Anti-Entropy version):

protected void onEpidemicTimeoutImpl(); // upon timeout do etc.
protected void onEpidemicReceiveImpl(EpidemicMessage message); // upon receive do etc.

A useful method in these implementations is randomProcess() from the base class, which returns a random process (of type ActorRef). In case we need to specify a sender in a message, the variable me (also of type ActorRef) is to be used. The value stored per Actor is a variable value of type EpidemicValue.

Besides, since each version of the protocol requires different information, you may also want to extend the base message class EpidemicMessage (which contains only a value of class EpidemicValue) with subclasses such as EpidemicPushMessage, EpidemicPullMessage and EpidemicPushPullMessage.
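
To show the overall mechanics without touching the stubs, here is a standalone, hypothetical simulation of push anti-entropy; it uses none of the classes above and only illustrates the behaviour you should observe.

import java.util.Random;

// Standalone illustration of push anti-entropy (independent of the lab stub):
// in every round each node pushes its value to one random peer, and the peer
// overwrites its own value if the pushed one is newer.
public class PushAntiEntropySketch {

    static class Node {
        long timestamp; // version of the stored value
        int value;

        void push(Node target) {
            if (this.timestamp > target.timestamp) { // the receiver keeps the newer value
                target.timestamp = this.timestamp;
                target.value = this.value;
            }
        }
    }

    public static void main(String[] args) {
        Random rng = new Random();
        Node[] nodes = new Node[50];
        for (int i = 0; i < nodes.length; i++) nodes[i] = new Node();
        nodes[0].value = 42;     // one node holds a fresh value...
        nodes[0].timestamp = 1;

        for (int round = 0; round < 20; round++) {   // ...and it spreads epidemically
            for (Node n : nodes) n.push(nodes[rng.nextInt(nodes.length)]);
        }

        int synced = 0;
        for (Node n : nodes) if (n.value == 42) synced++;
        System.out.println(synced + " of " + nodes.length + " nodes hold the new value");
    }
}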

A default log is integrated to show that "an actor A syncs a value v at a round t"; you can observe it to better understand the execution of each version of the protocol. You can add more logs if you want.

The solution for the exercise can be found here. Advanced exercises could be:

  • Synchronization of multiple values (instead of a single value)
  • There is a weak point in the EpidemicActor implementation: the timeout mechanism uses a Thread and a while(true) loop. Try to use the Scheduler to improve this.

1. Mar 3rd - Reliable Broadcast: Slide Exercise (application.conf and ApplicationMain)

For the first exercise, you will implement various types of broadcast, including Reliable Broadcast with FIFO order. Advanced exercises include the implementation of two different types of Causal Order (non-blocking and blocking).
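
As an orientation for the FIFO part, here is a minimal standalone sketch of FIFO delivery using per-sender sequence numbers; the class and method names are hypothetical and independent of the provided stub.

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of FIFO delivery: messages from the same sender are
// delivered in sending order by holding back anything that arrives ahead of
// the next expected sequence number.
public class FifoDeliverySketch {
    private final Map<String, Integer> nextExpected = new HashMap<>();          // per sender
    private final Map<String, Map<Integer, String>> pending = new HashMap<>();  // out-of-order buffer

    // Called on receipt of (sender, sequence number, payload).
    public void onReceive(String sender, int seq, String payload) {
        pending.computeIfAbsent(sender, s -> new HashMap<>()).put(seq, payload);
        Map<Integer, String> buffer = pending.get(sender);
        int next = nextExpected.getOrDefault(sender, 0);
        while (buffer.containsKey(next)) { // deliver as long as the next message is buffered
            deliver(sender, buffer.remove(next));
            next++;
        }
        nextExpected.put(sender, next);
    }

    private void deliver(String sender, String payload) {
        System.out.println("FIFO-delivered from " + sender + ": " + payload);
    }

    public static void main(String[] args) {
        FifoDeliverySketch fifo = new FifoDeliverySketch();
        fifo.onReceive("P1", 1, "second"); // arrives out of order, held back
        fifo.onReceive("P1", 0, "first");  // now both are delivered, in order
    }
}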

The slide is a general guideline for what we can do during the lab session, while the exercise is a Java stub for you to try and implement the broadcast algorithms. The ApplicationMain is an example that shows how we can test and see the results of the broadcasts.

There are flags that we can turn on/off to indicate which broadcast algorithm we want to use, e.g.

private boolean simulateFailureFIFO = false; //Without this FIFO cannot fail
private boolean simulateFailureFIFO100 = false; //We use this to simulate 100% failure for FIFO, i.e. the first message arrives last
private boolean useCausalOrder = false; //To use causal order
private boolean isCausalBlocking = false; //To use causal order blocking version

And of course, the more flags set to true, the harder the implementation :-)

An important note is that we need a separate thread for each Actor, so we have to declare this in application.conf, which you can find/put in the \src\main\resources folder.

This is the solution for the exercise; feel free to contact me regarding bugs or questions!!!