Abstracts


A Computational Approach to Packet Classification

Alon Rashelbach, Ori Rottenstreich, and Mark Silberstein


Multi-field packet classification is a crucial component in modern software-defined data center networks. To achieve high throughput and low latency, state-of-the-art algorithms strive to fit the rule lookup data structures into on-die caches; however, they do not scale well with the number of rules. We present a novel approach, NuevoMatch, which improves the memory scaling of existing methods. A new data structure, the Range Query Recursive Model Index (RQ-RMI), is the key component that enables NuevoMatch to replace most of the accesses to main memory with model inference computations. The use of RQ-RMI allows the rules to be compressed into model weights that fit into the hardware cache. Further, it takes advantage of the growing support for fast neural network processing in modern CPUs, such as wide vector instructions, achieving a rate of tens of nanoseconds per lookup. Our evaluation with 500K multi-field rules from the standard ClassBench benchmark shows geometric-mean compression factors of 4.9x, 8x, and 82x, and average throughput improvements of 2.4x, 2.6x, and 1.6x over CutSplit, NeuroCuts, and TupleMerge, respectively, all state-of-the-art algorithms.
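
To convey the flavor of the approach, the sketch below implements a toy learned range index: a model predicts where a key falls among sorted range boundaries, and a bounded local search corrects the prediction. The real RQ-RMI uses a multi-stage hierarchy of small neural networks evaluated with vector instructions; the linear fit, the ranges, and the error-bound handling here are simplified stand-ins.

```python
# A minimal, hypothetical sketch in the spirit of a learned range index:
# a model predicts the index of the range containing a key, and a local
# search bounded by the model's worst-case training error corrects it.
import bisect
import numpy as np

class LearnedRangeIndex:
    def __init__(self, lower_bounds):
        self.bounds = sorted(lower_bounds)      # lower bound of each range
        xs = np.array(self.bounds, dtype=float)
        ys = np.arange(len(self.bounds), dtype=float)
        self.a, self.b = np.polyfit(xs, ys, 1)  # "model": linear fit, key -> index
        # Maximum prediction error, computed once at build time; it bounds
        # the local search window at lookup time.
        preds = np.rint(self.a * xs + self.b).astype(int)
        self.err = int(np.max(np.abs(preds - np.arange(len(self.bounds))))) + 1

    def lookup(self, key):
        guess = int(round(self.a * key + self.b))
        lo = max(0, guess - self.err)
        hi = min(len(self.bounds), guess + self.err + 1)
        # Find the last lower bound <= key, searching only the error window.
        return bisect.bisect_right(self.bounds, key, lo, hi) - 1

idx = LearnedRangeIndex([0, 100, 250, 300, 800, 1024])
assert idx.lookup(270) == 2   # 270 falls in the range starting at 250
```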


Distributed Testing of Graph Isomorphism in the CONGEST Model

Reut Levi and Moti Medina


In this paper we study the problem of testing graph isomorphism (GI) in the CONGEST distributed model. In this setting we test whether the distributed network, G_U, is isomorphic to G_K, which is given as an input to all the nodes in the network, or alternatively, only to a single node. We first consider the decision variant of the problem, in which the algorithm should distinguish the case where G_U and G_K are isomorphic from the case where they are not. We provide a randomized algorithm with O(n) rounds for the setting in which G_K is given only to a single node, where n denotes the number of nodes. We also prove that any deterministic algorithm requires Ω̃(n²) rounds. Our algorithm can be adapted to the semi-streaming model, where a single pass is performed and Õ(n) bits of space are used. We then consider the property testing variant of the problem, where the algorithm is only required to distinguish the case that G_U and G_K are isomorphic from the case that G_U and G_K are far from being isomorphic. We show that every such algorithm requires Ω(D) rounds, where D denotes the diameter of the network. We provide a randomized algorithm with O(D + (ε^{-1} log n)²) rounds that is suitable for dense graphs. We also show that, within the same number of rounds, each node can output its mapping according to a bijection which is an approximate isomorphism.


A Deep Learning Approach for Detecting IP Hijack Attacks

Tal Shapira and Yuval Shavitt


The Internet consists of thousands of Autonomous Systems (ASes), each of which advertises one or more IP address prefixes (APs) using the Border Gateway Protocol (BGP). In recent years, there have been many reports of BGP prefix hijacking targeting nations and large companies, and more than 40% of network operators report that their organization has been a victim of a hijack in the past.

In this work, we harness deep learning to detect IP hijack attacks and gain additional insight into the Internet structure. First, we build on the excellent results achieved for NLP tasks and create a dense vector representation of AS numbers (ASNs), called BGP2Vec. In place of the sentences used in NLP, we use AS-level routes, such as the ones that appear in BGP announcements. Our results show that such an embedding indeed reveals latent characteristics of the ASNs.

Next, we use the difference between the coordinate vectors representing neighboring ASNs in a route to indicate their type of relationship (ToR). This allows us to tackle the long-studied problem of ToR identification. We find that similar ToRs are embedded into the same vicinity; therefore, we can classify a new ToR using its neighbors. This result allows us to apply valley-free routing rules in order to detect hijack attacks. Furthermore, we also train a model on complete routes to identify hijacked routes. This allows the system to also learn small deviations from valley-free routing that are due to complex ToRs.
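
The idea can be sketched with an off-the-shelf skip-gram implementation: AS-level routes play the role of sentences and ASNs the role of words, and ToRs are classified from embedding differences. The routes, hyperparameters, and labeled pairs below are made up for illustration; they are not the paper's dataset or exact pipeline.

```python
# A rough sketch of the BGP2Vec idea using gensim's skip-gram word2vec.
from gensim.models import Word2Vec
import numpy as np

routes = [                       # toy AS paths (as in BGP announcements)
    ["3356", "1299", "2914", "7922"],
    ["174", "3356", "6939", "32934"],
    ["1299", "174", "2914", "15169"],
]
model = Word2Vec(routes, vector_size=32, window=2, min_count=1, sg=1, epochs=50)

def tor_feature(asn_a, asn_b):
    # The difference of the two embeddings serves as the feature vector
    # for type-of-relationship (ToR) classification.
    return model.wv[asn_a] - model.wv[asn_b]

# With labeled ToR examples (e.g., provider-to-customer vs. peer-to-peer),
# a nearest-neighbor rule matches the "same vicinity" observation above.
labeled = {("3356", "7922"): "p2c", ("174", "1299"): "p2p"}
X = np.stack([tor_feature(a, b) for (a, b) in labeled])
y = list(labeled.values())
query = tor_feature("1299", "15169")
nearest = min(range(len(X)), key=lambda i: np.linalg.norm(X[i] - query))
print("predicted ToR:", y[nearest])
```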


Towards Scalable Verification of Deep Reinforcement Learning

Guy Amir, Michael Schapira and Guy Katz


Deep neural networks (DNNs) have gained significant popularity in recent years, becoming the state of the art in a variety of domains. In particular, deep reinforcement learning (DRL) has recently been employed to train DNNs that act as control policies for various types of real-world systems. In this work, we present the whiRL 2.0 tool, which implements a new approach for verifying complex properties of interest for such DRL systems. To demonstrate the benefits of whiRL 2.0, we apply it to case studies from the communication networks domain that have recently been used to motivate formal verification of DRL systems, and which exhibit characteristics that are conducive to scalable verification. We propose techniques for performing k-induction and automated invariant inference on such systems, and use these techniques to prove safety and liveness properties of interest that were previously impossible to verify due to the scalability barriers of prior approaches. Furthermore, we show how our proposed techniques provide insights into the inner workings and the generalizability of DRL systems. whiRL 2.0 is publicly available online.
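
For readers unfamiliar with k-induction, the skeleton below shows the proof rule on a toy one-dimensional system, with interval arithmetic standing in for the verification backend. In whiRL-style verification the base and step queries would be discharged by a DNN verifier; the system, the property, and the oracle here are illustrative stand-ins, not the tool's actual interface.

```python
# A generic k-induction loop with a toy "oracle" based on interval arithmetic.

def k_induction(base, step, max_k=10):
    for k in range(1, max_k + 1):
        if not base(k):
            return ("falsified", k)  # counterexample reachable within k steps
        if step(k):
            return ("proved", k)     # property is k-inductive, hence invariant
    return ("unknown", max_k)        # needs larger k or an auxiliary invariant

# Toy system: x' = x/2 + 0.4 with initial states x in [0, 0.99].
# Property P: x < 1 in every reachable state.
def base(k):
    lo, hi = 0.0, 0.99               # interval of initial states
    for _ in range(k + 1):
        if hi >= 1.0:                # P violated somewhere in the interval
            return False
        lo, hi = lo / 2 + 0.4, hi / 2 + 0.4
    return True

def step(k):
    hi = 1.0                         # sup over all states satisfying P
    for _ in range(k):
        hi = min(hi, 1.0) / 2 + 0.4  # one transition, re-assuming P on the way
    return hi < 1.0                  # does P survive one more step?

print(k_induction(base, step))       # -> ('proved', 1): P is 1-inductive here
```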


A Devil of a Time: How Vulnerable is NTP to Malicious Timeservers?

Yarin Perry, Neta Rozen-Schiff, Michael Schapira


The Network Time Protocol (NTP) synchronizes time across computer systems over the Internet and plays a crucial role in guaranteeing the correctness and security of many Internet applications. Unfortunately, NTP is vulnerable to so-called time-shifting attacks. This has motivated proposals and standardization efforts for authenticating NTP communications and for securing NTP clients. We observe, however, that even with such solutions in place, NTP remains highly exposed to attacks by malicious timeservers. We explore the implications for time computation of two attack strategies: (1) compromising existing NTP timeservers, and (2) injecting new timeservers into the NTP timeserver pool. We first show that by gaining control over fairly few existing timeservers, an opportunistic attacker can shift time at state-level or even continent-level scale. We then demonstrate that injecting new timeservers with disproportionate influence into the NTP timeserver pool is alarmingly simple, and can be leveraged for launching both large-scale opportunistic attacks and strategic, targeted attacks. We discuss a promising approach for mitigating such attacks.


CELL: Counter Estimation for Per-flow Traffic in Streams and Sliding Windows

Rana Shahout, Roy Friedman and Dolev Adas


Measurement capabilities are fundamental for a variety of network applications. Typically, recent data items are more relevant than old ones, a notion we can capture through a sliding-window abstraction. These capabilities require a large number of counters in order to monitor the traffic of all network flows; however, SRAM memories are too small to hold all these counters. Previous works suggested replacing counters with small estimators, trading accuracy for reduced space. But these estimators address only the counters' size, whereas flow IDs often consume more space than their respective counters. In this work, we present the CELL algorithm, which combines estimators with an efficient flow representation for superior memory reduction.

We also extend CELL to the sliding-window model, which prioritizes recent data, by presenting two variants named RAND-CELL and SHIFT-CELL. We formally analyze the error and memory consumption of our algorithms and compare their performance against competing approaches using real-world Internet traces. These measurements demonstrate the benefits of our work and show that CELL consumes at least 30% less space than the best-known alternative. Our code is available as open source.
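
CELL's estimators are not reproduced here, but the classic Morris-style probabilistic counter below illustrates the accuracy-for-space trade that such estimators exploit: a small stored exponent represents counts far beyond its bit width. This is the textbook technique, not the paper's exact construction.

```python
# A classic Morris-style probabilistic counter: the stored value c
# represents roughly ((1+a)^c - 1)/a increments, so a byte-sized field
# can track counts far beyond 255, at the cost of bounded relative error.
import random

class MorrisCounter:
    def __init__(self, a=0.05):
        self.a = a   # smaller a -> better accuracy, slightly larger c
        self.c = 0   # the only state we store: a small exponent

    def increment(self):
        # Increment c with probability (1+a)^(-c); in expectation the
        # estimate below grows by exactly 1 per call.
        if random.random() < (1 + self.a) ** (-self.c):
            self.c += 1

    def estimate(self):
        return ((1 + self.a) ** self.c - 1) / self.a

cnt = MorrisCounter()
for _ in range(100_000):
    cnt.increment()
# c ends up around 175, which fits in 8 bits; the estimate is typically
# within roughly 15% of the true count for this choice of a.
print(f"true=100000, estimate={cnt.estimate():.0f}, stored value c={cnt.c}")
```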


Cross Layer Attacks and How to Use Them (for DNS Cache Poisoning, Device Tracking and More)

Amit Klein


We analyze the prandom_u32 PRNG of the Linux kernel (which also underlies Android) and demonstrate that this PRNG is weak. We focus on three prandom_u32 consumers at the network level: the generation algorithms for the UDP source port, the IPv6 flow label, and the IPv4 ID. The flawed prandom PRNG is shared by all these consumers, which enables us to mount “cross-layer attacks”. In these attacks, we infer the internal state of prandom_u32 from one OSI layer, and use it either to predict the values of prandom_u32 employed by another OSI layer, or to correlate it with an internal state of the PRNG inferred from another protocol. Using this approach we can mount a very efficient DNS cache poisoning attack against Linux. We collect TCP/IPv6 flow label values, or UDP source ports, or TCP/IPv4 ID values, reconstruct the internal PRNG state, and then predict the UDP source port of an outbound DNS query, which speeds up the attack by a factor of 3000-6000. This attack works remotely, but can also be mounted locally, across Linux users and across containers, and (depending on the stub resolver) can poison the cache with arbitrary DNS records. Additionally, we can track Linux/Android devices: we collect TCP/IPv6 flow labels and/or UDP source port values and/or TCP/IPv4 IDs, reconstruct the PRNG internal state, and correlate this state to previously extracted PRNG states to identify the same device.


Graph Signal Processing for Electrical Networks

Tirza Routtenberg, Lital Dabush, Gal Morgenstern


The power grid is one of the largest and most complex cyber-physical networks. It relies on computers, communication, and physical networks to manage the generation, transmission, and distribution of electricity. The modern power grid is vulnerable to cyber-physical attacks and has unobservable parts due to the addition of renewable energy sources, failures, and malicious attacks. Thus, the development of enhanced signal processing methods for parameter estimation and event detection in electrical networks is essential for the reliability, security, and stability of the system.

In this work, we investigate key aspects of graph signal processing (GSP) for different monitoring tasks in electrical grids by using the graphical properties of the electric network. In particular, we develop methods for 1) state estimation in unobservable systems; 2) identification of line outages; and 3) detection of false data injection (FDI) attacks of various natures. We test the new methodologies in different scenarios using numerical simulations of power data. The establishment of new GSP-based strategies for various tasks aims to strengthen the network's estimation and detection capabilities and increase the security and reliability of smart grids.
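
One GSP primitive that underlies detectors of this kind is graph smoothness, measured by the Laplacian quadratic form x^T L x: legitimate grid states tend to vary slowly over the network graph, while injected bad data adds high graph-frequency energy. The toy 4-bus network, signal, and threshold below are invented for illustration and are not the paper's detector.

```python
# Toy illustration of a GSP-style detector: flag a state signal whose
# smoothness over the grid graph (x^T L x) exceeds a threshold.
import numpy as np

# Adjacency of a toy 4-bus ring network (weights stand in for susceptances).
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W        # graph Laplacian

def smoothness(x):
    return x @ L @ x                  # small for signals smooth on the graph

x_ok = np.array([1.00, 1.01, 1.02, 1.01])   # plausible voltage profile
x_fdi = x_ok.copy()
x_fdi[2] += 0.3                              # injected bad data at bus 2

tau = 0.01                                   # illustrative threshold
for name, x in [("normal", x_ok), ("attacked", x_fdi)]:
    s = smoothness(x)
    print(f"{name}: x^T L x = {s:.4f} -> {'ALARM' if s > tau else 'ok'}")
```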


Call Admission and Assignment in Cellular Networks with Vehicular Relay Nodes

Ran Levy, Hanoch Levy


In cellular networks, a base station can serve thousands of users, each at a different location. Cellular technology is limited by the deployed resources and cannot accommodate varying demands and high load peaks. To deal with these challenges, a new deployment scheme has been proposed that leverages parked vehicles as Vehicular Relay Nodes (VeRNs). A VeRN, equipped with a higher-gain cellular antenna than the user's device, can boost cellular network performance by relaying the user's data.

We study the problems of Call Admission (upon user arrival, should the system accept or reject the user?) and Call Assignment (upon user admission, to which VeRN should the user be linked?), which are key to efficient operation. To achieve an efficient and practical solution, we propose to decouple this problem and solve it via two weakly-coupled algorithms. For Admission, we formulate a non-trivial Markov Decision Process, introducing a dynamic operator that stochastically accounts for the possible assignments. For the Assignment problem, we derive a local Deep Reinforcement Learning algorithm using Imitation Learning, while introducing several novel improvements. Performance evaluation shows that these strategies offer a significant improvement over baselines.
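
To make the Admission side concrete, here is a generic admission-control MDP solved by value iteration. The arrival/departure model, the congestion cost, and all constants are invented for illustration; the paper's MDP, with its dynamic assignment-aware operator, is more involved.

```python
# A minimal admission-control MDP solved by value iteration. States count
# admitted users (capacity C); each slot an arrival occurs w.p. lam and at
# most one user departs w.p. mu. All numbers are illustrative.
import numpy as np

C = 5                 # capacity (max simultaneous users)
lam, mu = 0.6, 0.4    # P(arrival) and P(departure) per time slot
R_ADMIT, gamma = 1.0, 0.95

def cost(s):
    return 0.1 * s * s            # convex congestion cost per slot

def ev(V, s):
    # Expected value after a possible departure from occupancy s.
    s = min(s, C)
    return mu * V[max(s - 1, 0)] + (1 - mu) * V[s]

V = np.zeros(C + 1)
for _ in range(1000):             # value iteration to a (near) fixed point
    V = np.array([
        -cost(s)
        + lam * max(R_ADMIT + gamma * ev(V, s + 1) if s < C else -np.inf,
                    gamma * ev(V, s))              # admit vs. reject an arrival
        + (1 - lam) * gamma * ev(V, s)             # no arrival this slot
        for s in range(C + 1)
    ])

policy = ["admit" if s < C and
          R_ADMIT + gamma * ev(V, s + 1) >= gamma * ev(V, s) else "reject"
          for s in range(C + 1)]
print(policy)  # with a convex cost this comes out as a threshold policy
```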


Dynamic Architecture based on Network Virtualization and Distributed Orchestration for Management of Autonomic Network

Guy Saadon, Yoram Haddad, Noemie Simoni


In network management architectures of 5G and IoT networks, standardization groups often consider the network resource virtualization layer between the physical network and the SDN controller, as a means to allow deployment and placement of network services with their virtual network functions. However, the following question arises: is this layer enough to react to real-time changes originating from customers or the network without interrupting the service? We consider that a dynamic architecture should allow different and evolving assemblies to be provisioned during a session, in order to meet modification requests without requiring total redesign of the network service. Therefore, in this study, we propose an enhanced architecture. This novel architecture adds a network virtualization layer above the SDN controller with its associated orchestrator. Then, efficiently distributing orchestration among the different layers ensures network autonomy. In this context, we show how real-time service modifications and network failures are handled without losing the existing services and how network management gains additional dynamicity and flexibility.


Efficient In-network Computing with Limited Resources

Raz Segal, Chen Avin, Gabriel Scalosub


In-network computing via smart networking devices is a recent trend in modern datacenter networks. State-of-the-art switches with near-line-rate computing and aggregation capabilities are being developed to enable, e.g., acceleration and better utilization for modern applications like big data analytics and large-scale distributed and federated machine learning.

We formulate and study the problem of activating a limited number of in-network computing devices within a network, aiming at reducing the overall network utilization for a given workload. Such limitations on the number of in-network computing elements per workload arise, e.g., in incremental upgrades of network infrastructure, and also stem from the use of specialized middleboxes or FPGAs that must support heterogeneous workloads and multiple tenants.

We present an optimal and efficient algorithm for placing such devices in tree networks with arbitrary link rates, and further evaluate our proposed solution in various scenarios and for various tasks. Our results show that having merely a small fraction of network devices support in-network aggregation can lead to a significant reduction in network utilization. Furthermore, we show that various intuitive strategies for performing such placements exhibit significantly inferior performance compared to our solution, for varying workloads, tasks, and link rates.
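
As a lightweight illustration of why placement matters, here is a greedy baseline on a toy tree model: each leaf contributes one unit of traffic toward the root, and an activated node aggregates everything passing through it into a single unit. This is the kind of intuitive heuristic the paper's optimal algorithm outperforms, not the algorithm itself; the tree and cost model are invented.

```python
# Greedy activation of k aggregation nodes in a toy tree: each step, pick
# the node whose activation reduces total link load the most.

tree = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}

def link_loads(active):
    loads = {}
    def flows_out(v):                   # flows on v's uplink toward the root
        f = sum(flows_out(c) for c in tree[v]) if tree[v] else 1
        if v in active:
            f = 1                       # aggregation collapses them into one
        loads[v] = f
        return f
    flows_out(0)
    del loads[0]                        # the root has no uplink
    return loads

def greedy_placement(k):
    active = set()
    for _ in range(k):
        best = min((v for v in tree if v != 0 and v not in active),
                   key=lambda v: sum(link_loads(active | {v}).values()))
        active.add(best)
    return active

for k in range(3):
    placed = greedy_placement(k)
    total = sum(link_loads(placed).values())
    print(f"k={k} activate={sorted(placed)} total link load={total}")
```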


SDN Wireless Controller Placement Problem: The 4G LTE-U Case

Aviram Zilberman, Yoram Haddad, Sefi Erlich, Yossi Peretz, Amit Dvir


To cope with the shortage of available licensed spectrum, 4th Generation Long Term Evolution (4G LTE) is expected to be deployed also in the unlicensed spectrum. This raises the problem of coexistence among multiple operators. A Software Defined Networking (SDN) based control approach can better utilize the radio bandwidth by coordinating among multiple LTE-Unlicensed (LTE-U) operators. Within the SDN control plane there is a well-known issue, namely the Controller Placement Problem (CPP), which has a major impact on the efficiency of the control plane. This paper addresses the Wireless Controller Placement Problem (WCPP), where the SouthBound interface (SBi) is based on unlicensed 4G LTE. The proposed solution improves the throughput, the link failure probability, and the transparency of the SBi to concurrent transmissions on the control plane. Two heuristic solutions are considered, one based on simulated annealing and the other on ray shooting. The simulation results show that both are fast and efficient, and that the ray-shooting heuristic outperforms the simulated-annealing heuristic. In addition, it is shown that transparency improves in larger networks.
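
For reference, a generic simulated-annealing skeleton for controller placement is sketched below: choose k controller locations among the nodes to minimize the worst node-to-nearest-controller distance. The topology, cost function, and annealing schedule are invented; the paper's heuristics target LTE-U-specific objectives such as throughput, failure probability, and SBi transparency.

```python
# Generic simulated annealing for a k-controller placement problem.
import math
import random

random.seed(7)
nodes = [(random.random(), random.random()) for _ in range(40)]
k = 3

def cost(ctrls):
    # Worst-case distance from any node to its nearest controller.
    return max(min(math.dist(p, nodes[c]) for c in ctrls) for p in nodes)

cur = random.sample(range(len(nodes)), k)
cur_cost = cost(cur)
T = 1.0
while T > 1e-3:
    cand = cur.copy()
    cand[random.randrange(k)] = random.randrange(len(nodes))  # move one controller
    c = cost(cand)
    # Always accept improvements; accept regressions with a temperature-
    # dependent probability to escape local minima.
    if c < cur_cost or random.random() < math.exp((cur_cost - c) / T):
        cur, cur_cost = cand, c
    T *= 0.995                                                # cooling schedule

print(f"placement={sorted(cur)} worst-case distance={cur_cost:.3f}")
```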


Multiparty Interactive Communication with Broadcast Links

Manuj Mukherjee, Ran Gelles


We consider computations over networks with multiple broadcast channels that intersect at a single party. Each broadcast link suffers from random bit-flip noise that affects the receivers independently. We design interactive coding schemes that successfully perform any computation over these noisy networks and strive to reduce their communication overhead with respect to the original (noiseless) computation.

A simple variant of a coding scheme by Rajagopalan and Schulman (STOC 1994) shows that any (noiseless) protocol of R rounds can be reliably simulated in O(R log n) rounds over a network with n = n1·n2 + 1 parties, in which a single party is connected to n2 noisy broadcast channels, each of which connects n1 distinct parties. We design new coding schemes with improved overheads. Our approach divides the network into four regimes according to the relationship between n1 and n2. We employ two-layer coding, where the inner code protects each broadcast channel and is tailored to the specific conditions of the regime under consideration. The outer layer protects the computation in the network and is generally based on the scheme of Rajagopalan and Schulman, adapted to the case of broadcast channels. The overhead we obtain ranges from O(log log n2) to O((log n2 · log log n1)/log n1) and beats the trivial O(log n) overhead in all four regimes.


Autonomous NIC Offloads

Boris Pismenny, Haggai Eran, Aviad Yehezkel, Liran Liss, Adam Morrison, Dan Tsafrir


CPUs routinely offload to NICs network-related processing tasks like packet segmentation and checksum computation. NIC offloads are advantageous because they free valuable CPU cycles. But their applicability is typically limited to layer≤4 protocols (TCP and lower), and they are inapplicable to layer-5 protocols (L5Ps) that are built on top of TCP. This limitation is caused by a misfeature we call “offload dependence”, which dictates that L5P offloading additionally requires offloading the underlying layer≤4 protocols and related functionality: TCP, IP, firewall, etc. This dependence hinders innovation, because it implies hard-wiring the complicated, ever-changing implementation of the lower-level protocols.

We propose “autonomous NIC offloads,” which eliminate offload dependence. Autonomous offloads provide a lightweight software-device architecture that accelerates L5Ps without having to migrate the entire layer≤4 TCP/IP stack into the NIC. A main challenge that autonomous offloads address is coping with out-of-sequence packets.

We implement autonomous offloads for two L5Ps: (i) NVMe-over-TCP zero-copy and CRC computation, and (ii) HTTPS authentication, encryption, and decryption. Our autonomous offloads increase throughput by up to 3.3x, and they deliver CPU consumption and latency that are as low as 0.4x and 0.7x, respectively. Their implementation has already been upstreamed into the Linux kernel, and they will be supported in the next generation of Mellanox NICs.


Rethinking Zero Copy Networking with MAIO

Alex Markuze


Sockets have been the de facto standard API for network I/O. But with the advent of high-speed Ethernet, the performance overhead of BSD Sockets became too high to ignore. Attempts to avoid these overheads have spurred a trend toward kernel-bypass techniques, e.g., DPDK and AF_XDP. These methods avoid the performance penalties associated with BSD Sockets, i.e., memory copies, system calls, and a slow network stack.

However, with great performance comes the responsibility of re-creating a network infrastructure in userspace. Kernel developers attempt to close the gap by adding new capabilities, most notably XDP and MSG_ZEROCOPY, but none of these solutions is a panacea. In this work, we propose a new paradigm for userspace networking, aiming to shrink the performance gap between sockets and kernel-bypass techniques, allowing application developers to keep the socket API and the network stack without compromising on performance.


DNSSEC: Make or Break Attacks on DNS

Yehuda Afek, Anat Bremler-Barr, Daniel Dubnikov


Several different DNSSEC configurations have been suggested in recent years in an attempt to address different security and privacy issues in the DNS system. Here we analyze the different configurations, showing that while each solves one issue, it exposes one or two others. DNSSEC, a cryptography-based security mechanism in the DNS system, is primarily designed to provide authenticity of DNS responses, mitigating DNS poisoning and other unauthorized responses. Two other major issues, in addition to authenticity, that surround DNSSEC are DDoS attacks and privacy. While the initial configurations provide authenticity, they open the door to DDoS attacks, due to the much larger message size and the extra cryptographic computations. However, attempts to mitigate non-existent-domain DDoS (NXDomain flood) attacks have opened the door to DNS zone enumeration attacks. A key parameter in the different DNSSEC configurations is online versus offline signing. In the following, we argue that some form of online signing is a must in order to provide privacy and authenticity, and present an efficient mechanism to support online signing with minimal performance degradation.


Self-adjusting Advertisement of Cache Indicators with Bandwidth Constraints

Itamar Cohen, Gil Einziger, and Gabriel Scalosub


Cache advertisements reduce the access cost by allowing users to skip the cache when it does not contain their datum. Such advertisements are used in multiple networked domains such as 5G networks, wide area networks, and information-centric networking. The selection of an advertisement strategy exposes a trade-off between the access cost and the bandwidth consumption. Still, existing works mostly apply a trial-and-error approach to selecting the best strategy, as the rigorous foundations required for optimizing such decisions are lacking.

Our work shows that the desired advertisement policy depends on numerous parameters such as the cache policy, the workload, the cache size, and the available bandwidth. In particular, we show that there is no single ideal configuration. Therefore, we design an adaptive, self-adjusting algorithm that periodically selects an advertisement policy. Our algorithm does not require any prior information about the cache policy, cache size, or workload, and does not require any a priori configuration. Through extensive simulations, using several state-of-the-art cache policies and real workloads, we show that our approach attains a cost similar to that of the best static configuration (which is only identified in retrospect) in each case.
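
Cache indicators of this kind are commonly realized as compact approximate-membership sketches. The minimal Bloom-filter indicator below shows the basic mechanics of advertising and probing; the sizes and hash counts are invented, and the paper's contribution, deciding when and what to advertise under a bandwidth budget, is only hinted at by the snapshot step.

```python
# A minimal Bloom-filter cache indicator: the cache advertises a snapshot
# of the filter; users probe it before deciding whether to access the cache.
import hashlib

class BloomIndicator:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _hashes(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for h in self._hashes(key):
            self.bits[h] = 1

    def might_contain(self, key):
        # No false negatives; the false-positive rate grows with occupancy.
        return all(self.bits[h] for h in self._hashes(key))

# Cache side: insert cached items, then "advertise" a snapshot of the bits.
indicator = BloomIndicator()
for item in ["video-17", "img-42", "doc-3"]:
    indicator.add(item)
advertised = bytes(indicator.bits)          # what gets sent to users

# User side: skip the cache when the advertised indicator says "miss".
user_view = BloomIndicator()
user_view.bits = bytearray(advertised)
print(user_view.might_contain("img-42"))    # True: go to the cache
print(user_view.might_contain("img-99"))    # almost surely False: skip to origin
```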


Distributed Computing with the Cloud

Yehuda Afek, Gal Giladi, Boaz Patt-Shamir


Motivated by cloud storage (à la Dropbox, Google Drive, etc.), we investigate distributed computing in message-passing networks that contain a passive node that can only store and share data, and does not carry out any computations. Using basic primitives for collaborative transmission of a file from and to the cloud, we implement more complex tasks whose goal is to combine input values: e.g., each node holds a vector (or a matrix) as input, and the sum (or product) of all the inputs should be stored in the cloud. We present near-optimal algorithms for these tasks. Finally, we consider applications such as federated learning and file deduplication in this new model. Our results show that utilizing both node-cloud and node-node communication links can substantially speed up computation with respect to systems where processors communicate either only through the cloud or only through the network links.
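
As a back-of-the-envelope illustration of the model's point, the sketch below counts communication rounds for the vector-sum task under a crude abstraction (one vector transfer per link per round, a purely passive cloud). The two baselines and the cost model are invented for intuition and do not reproduce the paper's algorithms or bounds.

```python
# Round counts for "store the sum of n vectors in the cloud" under a crude
# cost model: the cloud can only store/share, so cloud-only aggregation must
# repeatedly download, add, and re-upload; the hybrid scheme first sums over
# node-node links in a binary tree, then performs a single upload.
import math

def rounds_cloud_only(n):
    # n-1 pairwise merges, each needing a download plus an upload, then the
    # final store. A crude abstraction, not the paper's exact baseline.
    return 2 * (n - 1) + 1

def rounds_hybrid(n):
    # Binary-tree reduction over node-node links, then one upload.
    return math.ceil(math.log2(n)) + 1

for n in (4, 64, 1024):
    print(f"n={n}: cloud-only={rounds_cloud_only(n)} hybrid={rounds_hybrid(n)}")
```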


Optimal ML-Based Control for Real-Time Video Communication

Mark Shifrin, Uri Avni


We consider a controllable real-time video stream over a stochastic channel. The stream, which is transmitted over a designated custom protocol, undergoes encoding prior to the transmission of the frames. Low latency and a minimal loss rate at the receiving side are important. We model the communication state according to periodic channel parameter measurements, which may include packet loss, RTT, congestion estimation, and others. The basic control actions include adjustment of the source rate at the encoder output, the percentage of FEC attached to the stream, and more. While FEC is effective in reducing losses, it can be harmful when it overly consumes bandwidth and potentially creates more congestion, implying a non-trivial trade-off. We define a custom reward function that reflects the sender's perception of the quality of the received video. Naturally, it increases with the amount of good data, and decreases with lost data and with utilized FEC. The goal is to maximize the long-run cumulative reward. While an effective result might be achieved by retrieving the optimal policy from a solution to the underlying MDP, the time needed to accumulate the required statistics is unacceptable. One possible solution is to apply a Q-learning algorithm or the like. However, one must also adjust the solution to an impatient receiver that cannot wait for convergence. To solve this, we introduce an improved version of TD(λ) learning combined with a heuristic policy as a learning domain. We discuss the first results of our approach and numerous future challenges.
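
For reference, the classic tabular TD(λ) rule with accumulating eligibility traces is sketched below on the standard random-walk chain. The abstract's contribution is an improved variant combined with a heuristic policy as the learning domain; this sketch shows only the textbook rule it builds on.

```python
# Standard tabular TD(lambda) with accumulating eligibility traces on a toy
# random-walk chain (states 0..6; 0 and 6 are terminal, reward 1 on the right).
import random

N = 7
alpha, gamma, lam = 0.1, 1.0, 0.8
V = [0.0] * N

for _ in range(2000):                      # episodes of an unbiased walk
    e = [0.0] * N                          # eligibility traces
    s = N // 2
    while s not in (0, N - 1):
        s2 = s + random.choice((-1, 1))
        r = 1.0 if s2 == N - 1 else 0.0
        delta = r + gamma * (0.0 if s2 in (0, N - 1) else V[s2]) - V[s]
        e[s] += 1.0                        # accumulate trace for current state
        for i in range(N):                 # one TD error updates all traced states
            V[i] += alpha * delta * e[i]
            e[i] *= gamma * lam            # traces decay by gamma * lambda
        s = s2

print([round(v, 2) for v in V[1:-1]])      # approaches [1/6, 2/6, ..., 5/6]
```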


Stochastic Epidemic Processes in Networks: The Effects of Spreading Heterogeneity and Control Mechanisms

Jhonatan Tavori, Hanoch Levy


Epidemic processes are widely used as an abstraction for various real-world and networking phenomena such as human infections, computer viruses, rumors in social networks, and others. The processes of pandemics and network-spread computer viruses bear significant similarities, implying that they may share models and results. Due to the recent prominence of and interest in COVID-19, we focus on this case study.

Under the SIR model, the process dies out at a point called the Herd Immunity Threshold (HIT), whose estimates guide strategic decisions concerning the COVID-19 pandemic. Recent studies showed that spreading heterogeneity plays a major role in determining the HIT.

We propose a stochastic model in which the spreading intensity and heterogeneity of a node are composed of two stochastic functions. The first reflects continual properties of each node, and the second reflects occasional spreads across the network. Consequently, studying the spreading dynamics requires the analysis of a stochastic process consisting of these two functions, whose dynamics differ drastically. Our results reveal that different societies may engender significantly different HITs.

Our model is used to address operational aspects and to examine the effectiveness of control mechanisms used to mitigate the spread. We reveal that although different policies might have a similar immediate impact, not all are “born equal”: while some lockdowns decrease the HIT, others increase it and may be counter-productive in the long run. The results can guide network administrators on how to optimize their network connections to limit the scope of the spread.
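
The qualitative effect of heterogeneity can be seen even in a crude simulation: give each node an "activity" drawn from a heavy-tailed distribution that scales both its susceptibility and infectivity, and the epidemic dies out after infecting a smaller share of the population than in the homogeneous case, since high-activity nodes are infected and removed early. The final attack size printed below is only a proxy for the HIT (epidemics overshoot the threshold), and all distributions and parameters are invented.

```python
# Crude discrete-time, well-mixed SIR with per-node activity levels.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
beta = 2.5 / N          # calibrated so the homogeneous case has R0 ~ 2.5

def final_share(activity):
    act = activity / activity.mean()   # normalize mean activity to 1
    S = np.ones(N, dtype=bool)
    S[:50] = False                     # seed 50 infections
    infectious_act = act[:50].sum()
    infected = 50
    while infectious_act > 0:
        # Per-node infection probability this step; susceptibility and
        # infectivity both scale with the node's activity.
        p = 1 - np.exp(-beta * act * infectious_act)
        new = S & (rng.random(N) < p)
        S &= ~new
        infectious_act = act[new].sum()   # infectious for a single step
        infected += int(new.sum())
    return infected / N

homog = final_share(np.ones(N))
hetero = final_share(rng.pareto(2.5, N) + 0.1)   # heavy-tailed activity
print(f"final infected share: homogeneous={homog:.2f}, heterogeneous={hetero:.2f}")
```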


Localhost Detour from Public to Private Networks

Dor Israeli, Yehuda Afek, Anat Bremler-Barr, Alon Noy


This paper presents a new localhost browser-based vulnerability and a corresponding attack that opens the door to new attacks on private networks and local devices. We show that this new vulnerability may put hundreds of millions of Internet users and their IoT devices at risk. Following the presentation of the attack, we suggest three new protection mechanisms to mitigate this vulnerability. The new attack bypasses recently suggested protection mechanisms designed to stop browser-based attacks on private devices and local applications.


KOSS: Kubernetes Optimize Service Selection

Daniel Bachar, Anat Bremler-Barr, David Hay


With the advent of cloud and container technologies, enterprises develop applications through a microservices architecture. This design pattern is mostly managed by orchestration systems (e.g., Kubernetes) that group the microservices into clusters. Current deployments (e.g., Submariner) provide the ability to connect multiple clusters such that various microservices can communicate with each other. In such a multi-cluster setting, copies of the same microservice may be deployed in different geo-locations, each with different cost and latency penalties. Yet, current service selection and load balancing mechanisms do not take these locations and the corresponding penalties into account. We present KOSS, a novel solution for optimizing the service selection, given a certain microservice deployment among the clusters in the system. Our solution transparently utilizes the current service discovery process. Our simulations show a significant reduction of the outbound traffic cost, by up to 29%, and of the latency, by up to 85%, when comparing our solution to currently deployed service selection mechanisms (e.g., Submariner's).
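
The selection decision at the heart of such a system can be reduced to a tiny optimization: among replicas of a microservice deployed in different clusters, pick the endpoint minimizing a weighted combination of egress cost and latency. The clusters, prices, and weights below are invented, and KOSS itself plugs into the Kubernetes service discovery process rather than a static table.

```python
# Cost/latency-aware replica selection, reduced to its simplest form.
replicas = [
    # (cluster,       egress $ per GB, RTT ms from the caller)
    ("local",         0.00,             1.2),
    ("same-region",   0.01,            11.0),
    ("cross-region",  0.09,            48.0),
]

def select(replicas, gb_per_req, alpha=1.0, beta=0.02):
    # alpha weighs dollars, beta weighs milliseconds; tuning this trade-off
    # is exactly the knob a cost/latency-aware selector exposes.
    def score(r):
        _, price, rtt = r
        return alpha * price * gb_per_req + beta * rtt
    return min(replicas, key=score)

print(select(replicas, gb_per_req=0.5))   # -> the local replica, in this toy table
```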


Time-efficiency and Reliability in NFV Networks

Roi Ben Haim and Ori Rottenstreich


Reliability and time-efficiency are two key elements to consider in network design. Commonly, each is measured both per service (the availability probability and the latency of a specific service) and overall (the system's average reliability and average latency, taking into account the demand for every service). Intuitively, minimizing latency requires minimizing the number of network elements a service makes use of. In a non-redundant environment, this would also guarantee the maximal reliability of a service, as reliability degrades when relying on more elements. However, reliability is often guaranteed by allocating backup resources. We explain that such redundancy, or the joint support for multiple services, can impose a trade-off between the reliability and time-efficiency criteria. In this paper, we study the conditions for the existence of such a trade-off and design solutions that jointly address both design goals. When a trade-off exists, we find and evaluate an upper bound on an optimal solution for the non-optimized criterion per different system parameters. We confirm our theoretical upper bounds using experimental results.
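
A back-of-the-envelope calculation shows the basic tension: a service traversing a chain of elements, each available with probability p, gains availability from 1+1 backups but, in this crude model, pays one latency unit per backup. The numbers and the latency model are illustrative only.

```python
# Series-chain availability vs. latency with 1+1 backups on some elements.
def chain(p, n, backups=0):
    avail_single = p
    avail_backed = 1 - (1 - p) ** 2       # element survives unless both copies fail
    availability = avail_backed ** backups * avail_single ** (n - backups)
    latency = n + backups                 # assumed: each backup adds one unit
    return availability, latency

for backups in range(4):
    a, l = chain(p=0.99, n=6, backups=backups)
    print(f"backups={backups}: availability={a:.4f}, latency={l}")
```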


Building an edge-queued datagram service for all datacenter traffic

Vladimir Olteanu, Haggai Eran, Dragos Dumitrescu, Adrian Popa, Georgios Nikolaidis, Mark Silberstein, Mark Handley, Costin Raiciu


Modern datacenters support a wide range of protocols and in-network switch enhancements aimed at improving performance. Unfortunately, the resulting protocols often do not coexist gracefully, because they inevitably interact via queuing in the network. In this talk we describe EQDS, a new datagram service for datacenters that moves almost all of the queuing out of the core network and into the sending host. This enables it to support multiple (conflicting) higher-layer protocols, while only sending packets into the network according to a receiver-driven credit scheme. EQDS can transparently speed up legacy TCP and RDMA stacks, and enables transport protocol evolution, while benefiting from future switch enhancements without needing to modify higher-layer stacks. We show through an evaluation of multiple implementations that EQDS can reduce the flow completion time (FCT) of legacy TCP by 2x, improve NVMe-oF RDMA throughput by 30%, and safely run TCP alongside RDMA on the same network.


IoT or NoT: Identifying IoT Devices in a Short Time Scale

Anat Bremler-Barr, Haim Levy, Zohar Yakhini


In recent years, the number of IoT devices in home networks has increased dramatically. Whenever a new device connects to the network, it must quickly be managed and secured using the relevant security mechanism or QoS policy. Thus, a key challenge is to distinguish between IoT and NoT (non-IoT) devices within minutes. Unfortunately, there is no clear indication of whether a device in a network is an IoT device. In this paper, we propose different classifiers that identify a device as IoT or NoT, on a short time scale and with high accuracy. Our classifiers were constructed using machine learning techniques on a seen (training) dataset and were tested on an unseen (test) dataset. They successfully classified devices that were not in the seen dataset with accuracy above 95%. The first classifier is a logistic regression classifier based on traffic features. The second classifier is based on features we retrieve from DHCP packets. Finally, we present a unified classifier that leverages the advantages of the other two classifiers.
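
The shape of the first (traffic-feature) classifier is easy to convey with scikit-learn on synthetic data. The features, distributions, and labels below are invented; the paper trains on real labeled traces and reports the above-95% accuracy on unseen devices.

```python
# A logistic-regression IoT/NoT classifier on synthetic traffic features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
# Toy per-device features: mean packet size (bytes), mean flow inter-arrival
# time (s), distinct endpoints contacted in the first minutes.
iot  = np.column_stack([rng.normal(120, 20, n),  rng.normal(30, 5, n), rng.poisson(3, n)])
not_ = np.column_stack([rng.normal(700, 200, n), rng.normal(2, 1, n),  rng.poisson(40, n)])
X = np.vstack([iot, not_])
y = np.array([1] * n + [0] * n)            # 1 = IoT, 0 = NoT

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"accuracy on held-out devices: {clf.score(X_te, y_te):.3f}")
```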