CloudPilot: Flow Acceleration in the Cloud. Kfir Toledo, David Breitgand, Dean Lorenz, Isaac Keslassy
TCP-split proxies have previously been studied as an efficient mechanism for improving the rate of connections with large round-trip times. These works focused on improving a single flow. In this paper, we investigate how strategically deploying TCP-split proxies in the cloud can improve the performance of geo-distributed applications with multiple flows that interconnect globally distributed sources and destinations, use different communication patterns, and are subject to budget limitations. We present CloudPilot, a Kubernetes-based system that measures communication parameters across different cloud regions and uses these measurements to deploy cloud proxies in optimized locations on multiple cloud providers. To this end, we model cloud-proxy acceleration and define a novel cloud-proxy placement problem. Since this problem is NP-hard, we propose several efficient heuristics to solve it. Finally, we find that our cloud-proxy optimization can improve performance by an average of 3.6× for four different use cases.
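To make the placement intuition concrete, here is a minimal, hypothetical sketch (not CloudPilot's actual algorithm) for choosing a single TCP-split proxy location: since split-connection performance is governed by the slowest segment, a greedy choice minimizes the larger of the two per-segment RTTs. The `rtt` matrix and region names are illustrative.

```python
def best_single_proxy(rtt, src, dst, candidates):
    """Pick the proxy region minimizing the larger of the two split-segment
    RTTs; returns (None, direct_rtt) when no proxy beats the direct path."""
    best, best_cost = None, rtt[src][dst]  # direct path as baseline
    for r in candidates:
        cost = max(rtt[src][r], rtt[r][dst])
        if cost < best_cost:
            best, best_cost = r, cost
    return best, best_cost

# Illustrative pairwise RTTs (ms) between hypothetical cloud regions.
rtt = {
    "us-east":  {"us-east": 0,   "eu-west": 80,  "ap-south": 200},
    "eu-west":  {"us-east": 80,  "eu-west": 0,   "ap-south": 130},
    "ap-south": {"us-east": 200, "eu-west": 130, "ap-south": 0},
}
```

A real placement problem adds budgets, multiple flows, and communication patterns, which is what makes it NP-hard; this sketch only shows the per-flow building block.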
Private Service Connect for Internal Load Balancing. Noam Lampert, Danny Raz
The rapid migration of organizations and businesses to the cloud introduces many new networking-related challenges. In this talk we concentrate on service migration, considering both performance and trust. Customers want to dynamically partition cloud resources into groups, each with its own level of isolation and independence, while still allowing a certain limited form of communication between the groups.
A classic example is when different teams within the same organization wish to deploy services that interact with each other, but each team wants to control its own resources, potentially including the network configuration. A more complex example is when a service provider and many of its customers reside within the same public cloud and expect to leverage private, performant client-service communication within this shared environment.
One particular challenge in these scenarios is IP address space overlap. When a service provider and a customer are on the same cloud, their private IP address spaces in the cloud may overlap; yet it does not make sense to require them to communicate via unprotected public IP traffic. Another challenge is managing partial trust. How can the service provider ensure that a customer can only access the frontend of the provided service, but not other entities on the network? How can firewall rules be enacted and trusted in such an environment?
We describe several relevant GCP solutions and focus in more detail on Private Service Connect for Internal Load Balancing (PSC for ILB).
Using a load balancer service on VMs running on Kubernetes. Ram Lavi
Over the last year, KubeVirt and MetalLB have proven to be a powerful duo for supporting fault-tolerant access to an application on virtual machines through an external IP address.
Cluster administrators of on-prem clusters without a network load balancer can now use the MetalLB operator to provide load-balancing capabilities (with Services of type `LoadBalancer`) to virtual machines.
In the presentation I will show how this can be done, using upstream KubeVirt on a Kubernetes cluster.
Network Observability - a journey from eBPF to network topologies and insights. Eran Raichstein
In this talk we will describe joint work by IBM Research and Red Hat in the space of Network Observability. We will highlight new open-source projects in this space and present existing and future work to build topologies and network insights from flow-logs. We will describe the relevant underlying technologies, such as eBPF, IPFIX, pipelines, and the transformation of flow-logs into time-series metrics. We will then demonstrate an end-to-end scenario over Red Hat OpenShift using the OVN-Kubernetes CNI, and conclude with a call for action and collaboration.
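As a rough illustration of the kind of transformation such a pipeline performs, the following sketch (not the project's actual code; record field names are assumed) aggregates raw flow-log records into per-interval byte counters suitable for export as time-series metrics:

```python
from collections import defaultdict

def flows_to_timeseries(flow_logs, interval_s=60):
    """Aggregate flow-log records into per-interval byte counters keyed by
    (interval start, src, dst) -- the shape of a time-series metric."""
    series = defaultdict(int)
    for rec in flow_logs:
        bucket = rec["ts"] // interval_s * interval_s  # align to interval
        series[(bucket, rec["src"], rec["dst"])] += rec["bytes"]
    return dict(series)
```

A production pipeline would of course add enrichment (pod/namespace labels), sampling, and export to a metrics store; the core flow-log-to-metric step looks like this.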
PRISM-Based Transport: How Can Networks Boost QoS for Advanced Video Services? Neta Rozen-Schiff, Amit Navon, Leon Bruckman, Itzcak Pechtalt
Future applications and services will challenge the network infrastructure with unprecedented demands for high bandwidth, low latency and reliable communication.
Moreover, applications that require several functionalities, such as control, telemetry, video, and audio, each with its own requirements, are growing ever more popular. For example, an interactive video service's control flows require low bandwidth and low latency, while its 4K video flows require high bandwidth and moderate latency.
Today's transport protocols do not address such heterogeneous requirements. Instead, only bandwidth allocation is provided by congestion control schemes, which support at most two priorities. This limitation impairs Quality of Service (QoS), since flow latency and bandwidth requirements are not satisfied in parallel. The overall QoS assurance is mainly handled by the application layer, which usually reduces the video stream quality.
We present PRISM, a new transport protocol on top of IP, which provides per flow granular and dynamic quality of service. PRISM applies state-of-the-art congestion control schemes (such as BBR, Proteus, Cubic) to allocate bandwidth. In addition, PRISM addresses reliability and latency requirements and couples flows from the same application together, enabling inter-flow synchronization.
In experiments with a kernel implementation on emulated networks, PRISM fulfilled the requirements of dense camera-grid applications, while the current transport protocols failed to satisfy all requirements.
Furthermore, PRISM reduced establishment latency by a factor of up to 1000 compared to multiple TCP connections.
DrawnApart: A Device Identification Technique Based on Remote GPU Fingerprinting. Tomer Laor, Naif Mehanna, Antonin Durey, Vitaly Dyadyuk, Pierre Laperdrix, Clémentine Maurice, Yossi Oren, Romain Rouvoy, Walter Rudametkin, and Yuval Yarom
Browser fingerprinting aims to identify users or their devices, through scripts that execute in the users' browser and collect information on software or hardware characteristics. It is used to track users or as an additional means of identification to improve security. Fingerprinting techniques have one significant limitation: they are unable to track individual users for an extended duration. This happens because browser fingerprints evolve over time, and these evolutions ultimately cause a fingerprint to be confused with those from other devices sharing similar hardware and software.
In this talk we will present DrawnApart, a new GPU fingerprinting technique. DrawnApart takes advantage of variations in speed among the multiple execution units that comprise a GPU, which serve as a reliable and robust device signature that can be collected using unprivileged JavaScript. DrawnApart is the first work to explore the manufacturing differences between identical GPUs and the first to exploit these differences in a privacy context.
We show that DrawnApart is effective in distinguishing devices with similar hardware and software that are indistinguishable by current state-of-the-art fingerprinting algorithms. We also show that integrating a one-shot version of DrawnApart into a state-of-the-art browser fingerprint tracking algorithm provides a boost of up to 67% to the median tracking duration. We verify our technique through a large-scale data collection from over 2,500 devices over a period of several months.
Twilight: A Differentially Private Payment Channel Network. Maya Dotan, Saar Tochner, Aviv Zohar, and Yossi Gilad
Payment channel networks (PCNs) provide a faster and cheaper alternative to transactions recorded on the blockchain. Clients can trustlessly establish payment channels with relays by locking coins and then send signed payments that shift coin balances over the network's channels. Although payments are never published, anyone can track a client's payment by monitoring changes in coin balances over the network's channels. We present Twilight, the first PCN that provides a rigorous differential privacy guarantee to its users.
Relays in Twilight run a noisy payment processing mechanism that hides the payments they carry. This mechanism increases the relay's cost, so to combat selfish relays that wish to avoid it, Twilight uses a trusted execution environment (TEE) that ensures relays follow its protocol.
The TEE does not store the channel's state, which minimizes the trusted computing base. Crucially, Twilight ensures that even if a relay breaks the TEE's security, it cannot break the integrity of the PCN. We analyze Twilight in terms of privacy and cost and study the trade-off between them. We implement Twilight using Intel's SGX framework and evaluate its performance using relays deployed on two continents. We show that a route consisting of 4 relays handles 820 payments/sec.
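Twilight's guarantee builds on differential privacy; as background, here is the standard Laplace mechanism underlying such guarantees (an illustrative sketch of the general principle, not Twilight's actual payment-processing mechanism):

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of the Laplace distribution with the given scale.
    u = rng.random() - 0.5
    return -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))

def noisy_release(true_value, epsilon, sensitivity, rng):
    """Release a value with Laplace noise of scale sensitivity/epsilon --
    the textbook epsilon-differentially-private mechanism."""
    return true_value + laplace_noise(sensitivity / epsilon, rng)
```

An observer monitoring noisy channel balances can no longer attribute a small balance shift to any particular payment, at the cost of the relay carrying the noise.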
AP2Vec: an Unsupervised Approach for BGP Hijacking Detection. Tal Shapira, Yuval Shavitt
BGP hijack attacks deflect traffic between endpoints through the attacker's network, enabling man-in-the-middle attacks. Thus, detecting such attacks is an important security challenge. In this paper, we introduce a novel approach to BGP hijacking detection that is based on the observation that during a hijack attack, the functional roles of ASNs along the route change. To identify a functional change, we build on previous work that embeds ASNs into vectors based on BGP routing announcements, and embed each IP address prefix (AP) into a vector representing its latent characteristics; we call this representation AP2Vec. We then compare the embedding of a new route with the AP embedding based on the old routes to identify large differences. We compare our unsupervised approach to several other new and previous approaches and show that it strikes the best balance between a high detection rate of hijack events and a low number of flagged events. In particular, for a two-hour route collection with 10-90,000 route changes, our algorithm typically flags 1-11 suspected events (0.01-0.05% false positives). Our algorithm also detected most of the previously published hijack events.
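As a simplified illustration of the detection idea (not the paper's exact procedure), one can flag a route change when the prefix's new embedding drifts far from its historical embedding, for example by cosine distance; the threshold value here is hypothetical:

```python
import math

def cosine_distance(u, v):
    # 1 - cosine similarity; 0 for identical directions, 1 for orthogonal.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1 - dot / (nu * nv)

def flag_route_change(old_embedding, new_embedding, threshold=0.5):
    """Flag a route change as a suspected hijack when the prefix's new
    embedding is far from its historical one (illustrative threshold)."""
    return cosine_distance(old_embedding, new_embedding) > threshold
```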
Hardware SYN Attack Protection For High Performance Load Balancers. Reuven Cohen, Matty Kadosh, Alan Lo, Qasem Sayah
SYN flooding is a simple and effective denial-of-service attack, in which an attacker sends many SYN requests to a target server in an attempt to consume server resources and make it unresponsive to legitimate traffic. While SYN attacks have traditionally targeted web servers, they are also known to be very harmful to intermediate cloud devices, and in particular to stateful load balancers (LBs). We propose LB schemes that guarantee a high throughput of one million connections per second, while supporting a high pool-update rate without breaking connections and withstanding high-rate SYN attacks of up to 10 million fake SYNs per second.
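A classic building block for stateless SYN-flood protection is the SYN cookie, which encodes the connection identity into the SYN-ACK sequence number so that no per-connection state is kept until the handshake completes. The sketch below illustrates the idea in Python (the paper's hardware LB schemes are far more involved; the key and message layout here are hypothetical, and real implementations wrap sequence numbers mod 2^32):

```python
import hashlib
import hmac
import time

SECRET = b"per-boot-secret"  # hypothetical key; rotated in practice

def syn_cookie(src_ip, src_port, dst_ip, dst_port, t=None):
    """Derive the SYN-ACK sequence number from the 4-tuple and a coarse
    timestamp, so the device stores nothing for half-open connections."""
    t = int(time.time()) // 64 if t is None else t
    msg = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{t}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big")

def verify_ack(src_ip, src_port, dst_ip, dst_port, ack_seq, t):
    # A legitimate client's final ACK acknowledges cookie + 1.
    return ack_seq == syn_cookie(src_ip, src_port, dst_ip, dst_port, t) + 1
```

Spoofed SYNs never complete the handshake, so they consume no table entries; only verified ACKs instantiate connection state.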
Practical Cross-Layer Radio Frequency-Based Authentication Scheme for Internet of Things. Arie Haenel , Yoram Haddad, Maryline Laurent and Zonghua Zhang
The Internet of Things world is in need of practical solutions for its security. Existing security mechanisms for IoT are mostly not implemented due to complexity, budget, and energy-saving issues. This is especially true for IoT devices that are battery powered and must be cost-effective to be deployed massively in the field.
In this work, we propose a new cross-layer approach that combines existing authentication protocols with existing physical-layer Radio Frequency Fingerprinting technologies to provide hybrid authentication mechanisms that are proven practical and efficient in the field. Although several Radio Frequency Fingerprinting methods have been proposed so far, whether as support for multi-factor authentication or on their own, practical solutions remain a challenge. The accuracy achieved even by the best systems using expensive equipment is still insufficient for real-life systems. Our approach proposes a hybrid protocol that can save energy and computation time on the IoT device side, in proportion to the accuracy of the Radio Frequency Fingerprinting method used, yielding a measurable benefit while keeping an acceptable security level. We implemented a full system operating in real time and achieved an accuracy of 99.8%, at an additional energy cost that reduces battery life by only ~20%.
Preventing the Flood: Incentive-Based Collaborative Mitigation for DRDoS Attacks. Matan Sabga, Anat Bremler-Barr
Distributed denial of service (DDoS) attacks, especially distributed reflection denial of service attacks (DRDoS), have increased dramatically in frequency and volume in recent years. Such attacks are possible due to the attacker’s ability to spoof the source address of IP packets. Since the early days of the internet, authenticating the IP source address has remained unresolved in the real world. Although there are many methods available to eliminate source spoofing, they are not widely used, primarily due to a lack of economic incentives.
We propose a collaborative on-demand route-based defense technique (CORB) that offers efficient DDoS mitigation as a paid-for service and suppresses reflection attacks before they reach the reflectors and flood the victim. The technique uses scrubbing facilities located across the internet at internet service providers (ISPs) and internet exchange points (IXPs).
By transmitting a small amount of data, based on Border Gateway Protocol (BGP) information, from the victim to the scrubbing facilities, we can filter out the attack without any false positives. For example, the data can be sent using DOTS, a new DDoS signaling protocol standardized by the IETF. CORB filters the attack before it is amplified by the reflector, thereby reducing the overall cost of the attack. This creates a win-win financial situation for the victim and the scrubbing facilities that provide the service.
We demonstrate the value of CORB by simulating a Memcached DRDoS attack using real-life data. Our evaluation found that deploying CORB on scrubbing facilities at approximately 40 autonomous systems blocks 90% of the attack and can reduce the mitigation cost by 85%.
Compressing Distributed Network Sketches with Traffic-Aware Summaries. Dor Harris, Arik Rinberg, and Ori Rottenstreich
Network measurements are important for identifying congestion, DDoS attacks, and more. To support real-time analytics, stream ingestion is performed jointly by multiple nodes, each observing part of the traffic and periodically reporting its measurements to a single centralized server that aggregates them. To avoid communication congestion, each node reports a compressed version of its collected measurements. Traditionally, nodes symmetrically report summaries of the same size computed on their data. We explain that to maximize the accuracy of the joint measurement, nodes should apply different compression ratios to their measurements, based on the amount of traffic each node observes. We illustrate the approach for three common sketches: the Count-Min (CM) sketch, which estimates flow sizes, and the K-Minimum-Values (KMV) and HyperLogLog (HLL) sketches, which both estimate the number of distinct flows. For each sketch, we compute node compression ratios based on the traffic distribution. In general, this requires a single round of communication with the central server, after which the compression ratio for each node can be computed. We perform extensive simulations for the sketches and analytically show that, under real-world scenarios, our sketches send smaller summaries than traditional ones while retaining similar error bounds.
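To see why per-node compression ratios are possible at all, note that common sketches can be shrunk after the fact. The sketch below is a minimal Count-Min implementation whose width can be folded in half, illustrating (not reproducing) the kind of summary compression the paper optimizes:

```python
import hashlib

class CountMin:
    """Minimal Count-Min sketch whose width can be folded in half, so a
    node observing little traffic can ship a smaller summary."""

    def __init__(self, depth=3, width=64):
        self.depth, self.width = depth, width
        self.rows = [[0] * width for _ in range(depth)]

    def _h(self, i, key):
        # Deterministic per-row hash (Python's built-in hash() is salted).
        d = hashlib.blake2b(f"{i}:{key}".encode(), digest_size=8).digest()
        return int.from_bytes(d, "big")

    def add(self, key, count=1):
        for i, row in enumerate(self.rows):
            row[self._h(i, key) % self.width] += count

    def estimate(self, key):
        return min(row[self._h(i, key) % self.width]
                   for i, row in enumerate(self.rows))

    def fold(self):
        """Halve the width. Estimates stay valid upper bounds because each
        new counter is the sum of the two counters it replaces, and
        h % (w/2) agrees with h % w modulo the fold."""
        half = self.width // 2
        self.rows = [[row[j] + row[j + half] for j in range(half)]
                     for row in self.rows]
        self.width = half
```

Folding trades accuracy for size: a lightly loaded node can fold more aggressively than a heavily loaded one, which is exactly the asymmetry the paper exploits.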
Distributed Shared State Abstractions for Programmable Switches. Lior Zeno, Dan R. K. Ports, Jacob Nelson, Daehyeok Kim, Shir Landau Feibish, Idit Keidar, Arik Rinberg, Alon Rashelbach, Igor De-Paula, and Mark Silberstein
We design and evaluate SwiSh, a distributed shared state management layer for data-plane P4 programs. SwiSh enables running scalable stateful distributed network functions on programmable switches entirely in the data-plane. We explore several schemes to build a shared variable abstraction, which differ in consistency, performance, and in-switch implementation complexity. We introduce the novel Strong Delayed-Writes (SDW) protocol which offers consistent snapshots of shared data-plane objects with semantics known as r-relaxed strong linearizability, enabling implementation of distributed concurrent sketches with precise error bounds.
We implement strong, eventual, and SDW consistency protocols in Tofino switches, and compare their performance in microbenchmarks and in three realistic network functions: a NAT, a DDoS detector, and a rate limiter. Our results show that distributed state management in the data plane is practical, and outperforms centralized solutions by up to four orders of magnitude in update throughput and replication latency.
SQUAD: Combining Sketching and Sampling Is Better than Either for Per-item Quantile Estimation. Rana Shahout, Roy Friedman, Ran Ben Basat
Stream monitoring is fundamental in many data stream applications, such as financial data trackers, security, anomaly detection, and load balancing. In that respect, quantiles are of particular interest, as they often capture the user's utility. For example, if a video connection has high tail latency, the perceived quality will suffer, even if the average and median latencies are low.
In this work, we consider the problem of approximating the per-item quantiles. Elements in our stream are (ID, latency) tuples, and we wish to track the latency quantiles for each ID.
Existing quantile sketches are designed for a single numeric stream. While one could allocate a separate sketch instance for each ID, this may require an infeasible amount of memory. Instead, we consider tracking the quantiles only for the heavy hitters, which are often considered particularly important, without knowing them beforehand.
We first present a simple sampling algorithm that serves as a benchmark. Then, we design an algorithm that augments a quantile sketch within each entry of a heavy hitter algorithm, resulting in similar space complexity but with a deterministic error guarantee. Finally, we present SQUAD, a method that combines sampling and sketching while improving the asymptotic space complexity. Intuitively, SQUAD uses a background sampling process to capture the behaviour of the latencies of an item before it is allocated with a sketch, thereby allowing us to use fewer samples and sketches. Our solutions are rigorously analyzed, and we demonstrate the superiority of our approach using extensive simulations.
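As a minimal illustration of per-item quantile tracking by sampling (a simplified benchmark in the spirit of the sampling baseline above, not SQUAD itself), one can keep a bounded uniform reservoir of latencies per ID and answer quantile queries from the sample:

```python
import random

class PerItemQuantiles:
    """Track a bounded uniform sample of latencies per item ID and answer
    approximate quantile queries from the sample."""

    def __init__(self, reservoir_size=100, rng=None):
        self.k = reservoir_size
        self.rng = rng or random.Random(0)
        self.samples = {}  # id -> sampled latencies
        self.counts = {}   # id -> total observations

    def update(self, item, latency):
        n = self.counts.get(item, 0) + 1
        self.counts[item] = n
        res = self.samples.setdefault(item, [])
        if len(res) < self.k:
            res.append(latency)
        else:  # reservoir sampling keeps the sample uniform over the stream
            j = self.rng.randrange(n)
            if j < self.k:
                res[j] = latency

    def quantile(self, item, q):
        res = sorted(self.samples[item])
        return res[min(int(q * len(res)), len(res) - 1)]
```

The space cost is one reservoir per ID, which is exactly the blow-up SQUAD avoids by allocating sketches only to items that prove heavy.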
Using Internet Measurements to Map the 2022 Ukrainian Refugee Crisis. Tal Mizrahi and Jose Yallouz
The conflict in Ukraine, starting in February 2022, triggered the largest refugee crisis in decades, with millions of Ukrainian refugees crossing the border to neighboring countries and millions of others forced to move within the country. In this paper we present an insight into how Internet measurements can be used to analyze the refugee crisis. Based on preliminary data from the first two months of the war, we analyze how measurement data indicates trends in the flow of refugees from Ukraine to its neighboring countries, and onward to other countries. We believe that these insights can greatly contribute to the ongoing international effort to map the flow of refugees in order to aid and protect them.
A Few Shots Traffic Classification with mini-FlowPic Augmentations. Eyal Horowicz, Tal Shapira, Yuval Shavitt
Internet traffic classification has been intensively studied over the past decade due to its importance for traffic engineering and cyber security. One of the best solutions to several traffic classification problems is the FlowPic approach, where histograms of packet sizes in consecutive time slices are transformed into a picture that is fed into a Convolution Neural Network (CNN) model for classification.
However, CNNs (including the FlowPic approach) require a relatively large labeled flow dataset, which is not always easy to obtain. In this paper, we show that we can overcome this obstacle by replacing the large labeled dataset with a few samples of each class, using augmentations to inflate the number of training samples. We show that common picture augmentation techniques help, but accuracy improves further when introducing augmentation techniques that mimic network behavior, such as changes in the RTT.
Finally, we show that we can replace the large FlowPics suggested in the past with much smaller mini-FlowPics and achieve two advantages: improved model performance and easier engineering. Interestingly, this even improves accuracy in some cases.
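To make the FlowPic representation concrete, the following sketch builds a small 2D time-slice/packet-size histogram and applies an RTT-style augmentation by stretching arrival times (illustrative bin counts and limits, not the paper's exact preprocessing):

```python
def mini_flowpic(packets, time_bins=32, size_bins=32,
                 max_time=1.0, max_size=1500):
    """Build a small 2D histogram ('mini-FlowPic') from (time, size)
    packet pairs; rows are time slices, columns are packet-size bins."""
    pic = [[0] * size_bins for _ in range(time_bins)]
    for t, s in packets:
        ti = min(int(t / max_time * time_bins), time_bins - 1)
        si = min(int(s / max_size * size_bins), size_bins - 1)
        pic[ti][si] += 1
    return pic

def rtt_stretch(packets, factor):
    """Network-aware augmentation: mimic an RTT change by scaling packet
    arrival times, yielding a new training sample from the same flow."""
    return [(t * factor, s) for t, s in packets]
```

Each augmented copy of a flow produces a slightly different picture, letting a CNN train from only a few labeled flows per class.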
Scaling Open vSwitch with a Computational Cache. Alon Rashelbach, Ori Rottenstreich, Mark Silberstein
Open vSwitch (OVS) is a widely used open-source virtual switch implementation. In this work, we seek to scale up OVS to support hundreds of thousands of OpenFlow rules by accelerating the core component of its data-path: the packet classification mechanism. To do so, we use NuevoMatch, a recent algorithm that uses neural network inference to match packets and promises significant scalability and performance benefits. We overcome the primary algorithmic challenge of the slow rule-update rate in vanilla NuevoMatch, speeding it up by over three orders of magnitude. This improvement enables two design options for integrating NuevoMatch with OVS: (1) using it as an extra caching layer in front of OVS's megaflow cache, and (2) using it to completely replace OVS's datapath while performing classification directly on OpenFlow rules, obviating control-path upcalls. Our comprehensive evaluation on real-world packet traces and ClassBench rules demonstrates geometric-mean speedups of 1.9× and 12.3× for the first and second designs, respectively, for 500K rules, with the latter also supporting up to 60K OpenFlow rule updates per second, far exceeding the original OVS.
WeRLman: To Tackle Whale (Transactions), Go Deep (RL). Roi Bar-Zur, Ameer Abu-Hanna, Ittay Eyal, Aviv Tamar
The security of proof-of-work blockchain protocols critically relies on incentives. Their operators, called miners, receive rewards for creating blocks containing user-generated transactions. Each block rewards its creator with newly minted tokens and with transaction fees paid by the users. The protocol stability is violated if any of the miners surpasses a threshold ratio of the computational power; she is then motivated to deviate with selfish mining and increase her rewards. Previous analyses of selfish mining assumed constant rewards. But with statistics from operational systems, we show that there are occasional whales: blocks with exceptional rewards. Modeling this behavior implies a state-space that grows exponentially with the parameters, becoming prohibitively large for existing analysis tools.
We present the WeRLman framework to analyze such models. WeRLman uses deep Reinforcement Learning (RL) and employs novel variance reduction techniques. Evaluating WeRLman against models we can accurately solve demonstrates it achieves unprecedented accuracy in deep RL for blockchain. We use WeRLman to analyze the incentives of a rational miner in various settings and upper-bound the security threshold of Bitcoin-like blockchains. The previously known bound, with constant rewards, stands at 25%. We show that considering whale transactions reduces this threshold considerably. Considering Bitcoin historical fees and its future minting policy, its threshold for deviation will drop to 20% in 10 years, and to 12% in 30 years. With recent fees from the Ethereum smart-contract platform, the threshold drops to 17%. These are below the common sizes of large miners.
A Multi-RPL Adversary Identification Scheme in IoT Networks. Aviram Zilberman, Ariel Stulman, Amit Dvir
Cyber-threat protection is one of the most challenging research branches of Internet-of-Things (IoT).
With the exponential increase of tiny connected devices pushing personal data to the Internet, the battle between friend and foe intensifies. Unfortunately, contemporary IoT devices often offer very limited security features, laying themselves wide open to new, sophisticated attacks and inhibiting the expected global adoption of IoT technologies. Moreover, existing prevention and mitigation techniques and intrusion detection systems handle the anomalies of an attack rather than the attack itself, while using a significant amount of the network's resources.
RPL, the de-facto routing protocol for low-power and lossy networks, offers only minimal security features, which cannot handle internal attacks. Hence, in this paper we propose SPRINKLER, which identifies the specific thing that is under attack by an adversarial man-in-the-middle. SPRINKLER uses the multi-instance feature of RPL to secure the network from the adversary. The proposed solution is specifically designed to be applicable in existing networks by adhering to two basic principles: it only uses pre-existing standard routing protocols, and it does not rely on a centralized or trusted third-party node such as a certificate authority. All information must be gleaned by each node using only primitives that already exist in the underlying communication protocol, excluding any training dataset. Using simulations, we show that SPRINKLER adds minimal maintenance and energy expenditure, while deterministically pinpointing the attacker in the network.
Tornadoes In The Cloud: Worst-Case Attacks on Distributed Resources Systems. Jhonatan Tavori and Hanoch Levy
Geographically distributed cloud networks are used by a variety of applications and services worldwide. As the demand for these services increases, their data centers form an attractive target for malicious attackers aiming at harming the services. In this study we address sophisticated attackers who aim at causing maximal damage to the service.
A worst-case (damage-maximizing) attack is an attack that minimizes the revenue of the system operator by disrupting users from being served. A sophisticated attacker needs to decide how many attacking agents should be launched at each of the system's regions in order to inflict maximal damage.
We characterize and analyze damage-maximization strategies for a number of attacks, including a deterministic attack and a concurrent stochastic-agents attack. We also address a user-migration defense, which allows dynamically migrating demands among regions, and we provide efficient algorithms for deriving worst-case attacks given a system with arbitrary placement and demands.
In ongoing work, the analysis is extended to address queueing networks whose performance, mainly delay, depends on the flow and congestion of their traffic. These include computer networks (the Internet) and the routing of vehicles (e.g., via Waze) on vehicular networks.
The results form a basis for devising network planning and resource allocation strategies aiming at minimizing attack damages.
The talk is based on papers presented in IEEE INFOCOM 2021 and SIGMETRICS MAMA 2022.
Online Disengagement. Yossi Azar, Chay Machluf, Boaz Patt-Shamir and Noam Touitou
Motivated by placement of jobs in physical machines, we introduce and analyze the problem of online recoloring, or online disengagement. In this problem, we are given a set of n weighted vertices and an initial k-coloring of the vertices (vertices represent jobs, and colors represent physical machines). Edges, representing conflicts between jobs, arrive in an online fashion. Following each edge arrival, the algorithm must output a proper k-coloring of the vertices. Each recoloring of a vertex costs the weight of that vertex, and the goal is to minimize the overall cost.
We consider a couple of polynomially-solvable coloring variants. First, for 2-coloring bipartite graphs we present an O(log n)-competitive deterministic algorithm and an Omega(log n) lower bound on the competitive ratio of randomized algorithms. Second, for (Delta+1)-coloring we present tight bounds of Theta(Delta) and Theta(log Delta) on the competitive ratios of deterministic and randomized algorithms, respectively (where Delta denotes the maximum degree). All algorithms are applicable in the weighted case, and all lower bounds hold in the restricted unweighted case.
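A natural greedy strategy for the 2-coloring case, flipping the lighter of the two merging components when a conflict edge arrives, can be sketched as follows (an illustrative heuristic in the spirit of the problem, not the paper's competitive algorithm):

```python
class Disengagement:
    """Online 2-recoloring: when a conflict edge joins two components whose
    endpoint colors clash, flip every vertex in the lighter component."""

    def __init__(self, weights, init_colors):
        self.w = dict(weights)
        self.color = dict(init_colors)          # vertex -> 0/1
        self.comp = {v: {v} for v in weights}   # vertex -> its component

    def add_edge(self, u, v):
        """Process a conflict edge; return the recoloring cost incurred."""
        cu, cv = self.comp[u], self.comp[v]
        if cu is cv:
            if self.color[u] == self.color[v]:
                raise ValueError("graph is no longer 2-colorable")
            return 0
        cost = 0
        if self.color[u] == self.color[v]:
            wu = sum(self.w[x] for x in cu)
            wv = sum(self.w[x] for x in cv)
            flip = cu if wu <= wv else cv       # recolor the lighter side
            cost = min(wu, wv)
            for x in flip:
                self.color[x] = 1 - self.color[x]
        merged = cu | cv
        for x in merged:
            self.comp[x] = merged
        return cost
```

Since a vertex is only flipped when it sits in the lighter half of a merge, each vertex's component at least doubles in weight between flips, which is the intuition behind logarithmic competitive bounds for this kind of strategy.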
The Benefits of General-Purpose On-NIC Memory. Boris Pismenny, Liran Liss, Adam Morrison, and Dan Tsafrir
We propose to use the small, newly available on-NIC memory (“nicmem”) to keep pace with the rapidly increasing performance of NICs. We motivate our proposal by accelerating two workload classes: NFV and key-value stores. As NFV workloads frequently operate on the headers of incoming packets, rather than their data, we introduce a new packet-processing architecture that splits between the two, keeping the data on nicmem when possible and thus reducing PCIe traffic, memory bandwidth, and CPU processing time. Our approach consequently shortens NFV latency by up to 23% and increases its throughput by up to 19%. Similarly, because key-value stores commonly exhibit skewed distributions, we introduce a new network stack mechanism that lets applications keep frequently accessed items on nicmem. Our design shortens memcached latency by up to 43% and increases its throughput by up to 80%.
Decision-making support for an autonomous SDN orchestrator. Guy Saadon, Yoram Haddad, Michael Dreyfuss, and Noemie Simoni
The deployment of 5G and IoT networks has already started. Given their growing complexity, such networks require complete autonomy, i.e., operation without human intervention. For this purpose, an orchestration application has been introduced in the field of network management. This application automates, coordinates, and manages network resources, supporting countless demands for services and applications. In this context, complex rule-driven and artificial-intelligence-based methods are investigated to help orchestrators make accurate decisions on the way to becoming a zero-touch network. Thus, a decision-making process at the orchestrator level is important. However, rules defined at the level of the orchestrator can generate delays and yield loops, contradictions, redundancies, or even deadlocks in a network. In this study, a novel mathematical function, located at the orchestrator of the network resources, is presented to support decision-making in selecting the most relevant rule to apply under a contention scenario. A network requires permanent monitoring; thus, if some rules appear to be ineffective or dangerous for network autonomy in certain cases, they will be eliminated. Using this function, different applicable rules are compared, and the most efficient rule for optimally allocating the required resources is selected. Based on a numerical analysis, we show how this decision process creates a more resilient and autonomous network.
Heterogeneous SDN Controller Placement Problem - The Wi-Fi and 4G LTE-U Case. Aviram Zilberman, Yoram Haddad, Sefi Erlich, Yossi Peretz, Amit Dvir
The widespread deployment of 5G wireless networks has brought about a radical change in the nature of mobile data consumption. It requires deploying Long Term Evolution in the unlicensed spectrum (LTE-U) to alleviate the shortage of available licensed spectrum, while Wi-Fi remains the most dominant radio access technology in the unlicensed spectrum. This has led to suggestions concerning the coexistence of multiple radio access technologies such as Wi-Fi and LTE-U. Clearly, Software Defined Networks (SDN) can utilize the LTE-U radio bandwidth efficiently. The SDN paradigm decouples the network's control logic from the underlying routers and switches, which promotes centralization of network control. The controller placement problem is crucial to SouthBound interface optimization and involves the location of the controllers, their number, and the assignment of switches to controllers. In this paper, a novel strategy is presented to address the wireless controller placement problem, improving throughput, link-failure probability, and transparency on the SouthBound interface in cases where hybrid providers choose either LTE-U or Wi-Fi as the link-layer technology. The problem of placing LTE-U- and Wi-Fi-based controllers is modeled, and two heuristic solutions are proposed. Simulations show that the proposed algorithms provide a fast and efficient solution.