Next Talks:


Date and Time:

Speaker:

Abstract:

All Talks:

Offset-Symmetric Gaussians for Differential Privacy

Date and Time: 19 November 2021, 2:00 PM (AEDT).

Speaker: Dr Mehdi Korki (Swinburne University of Technology)

Abstract: Differential privacy (DP) is a widely used statistical framework for protecting the privacy of individuals in datasets when the data are to be shared with the public, used in training machine learning algorithms, or used in responding to adaptive queries from adversarial data analysts. Notable applications of DP include implementations by Google, Microsoft, and Apple, as well as uses in deep learning, federated learning, and the 2020 US Census. In a typical implementation of DP, the privacy-preserving mechanism outputs a noisy version of the function of interest (i.e., the query) evaluated at the input dataset. The two most commonly used noise distributions are the zero-mean Laplace and Gaussian distributions.

In this talk, I will first give an overview of DP and then present the limitations of the two standard mechanisms, Laplace and Gaussian. Finally, I will introduce a new distribution for use in DP that is based on the Gaussian distribution but has improved privacy performance. This novel mechanism, called the offset-symmetric Gaussian tail (OSGT) mechanism, is obtained from the normalized tails of two Gaussians placed symmetrically around zero. Consequently, it retains sub-Gaussian tails and lends itself to analytical derivations. We show that, at the same variance, the OSGT mechanism can outperform the Gaussian mechanism in terms of approximate DP and zero-concentrated differential privacy (zCDP). Thanks to its Gaussian origin, a discrete version of the OSGT mechanism can also be easily developed, which can be used in Census applications.
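As a rough, hedged illustration of the two baseline mechanisms compared in the talk (not the OSGT mechanism itself), the sketch below privatizes a counting query; the sensitivity values, epsilon, and delta are illustrative assumptions, not parameters from the talk:

```python
import math
import random

rng = random.Random(0)

def laplace_mechanism(true_answer, l1_sensitivity, epsilon):
    # Adding Laplace noise with scale sensitivity/epsilon gives (epsilon, 0)-DP.
    scale = l1_sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of a zero-mean Laplace variate.
    return true_answer - scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def gaussian_mechanism(true_answer, l2_sensitivity, epsilon, delta):
    # Classical calibration: sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon
    # gives (epsilon, delta)-DP for epsilon < 1.
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * l2_sensitivity / epsilon
    return true_answer + rng.gauss(0.0, sigma)

# Example: a counting query has sensitivity 1.
print(laplace_mechanism(42, 1.0, 0.5))
print(gaussian_mechanism(42, 1.0, 0.5, 1e-5))
```

Larger epsilon means less noise and weaker privacy; the OSGT mechanism replaces the Gaussian noise distribution while keeping this overall output-perturbation structure.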

(Joint work with Prof. Parastoo Sadeghi at The University of New South Wales, Canberra)

(Slides)

Wireless Networked Control for Industry 4.0

Date and Time: 5 November 2021, 2:00 PM (AEDT).

Speaker: Dr Wanchun Liu (University of Sydney)

Abstract: Industry 4.0 is the automation of traditional manufacturing and industrial processes through customised and flexible mass production. Cutting the communication cables in conventional factories will be a game-changer: for automatic control, Industry 4.0 will require the large-scale, interconnected deployment of massive numbers of spatially distributed industrial devices such as sensors, actuators, robots, and controllers. With low-cost and scalable deployment capabilities, wireless networked control will play a key role in many industrial applications, such as smart manufacturing, industrial automation systems, e-commerce warehouses, and smart grids.

Building a large-scale wireless networked control system (WNCS) for Industry 4.0 faces significant technical challenges. These stem from limited spectrum and power, and from the strong randomness and uncertainty of wireless channels caused by time-varying channel gains and interference, which lead to random delays and packet losses that severely affect the stability of WNCSs. In this talk, I will first introduce WNCSs and then discuss the challenges, solutions, and future research directions.

(Slides: Wanchun_AUSCTW_2021_brief.pdf)

Low latency coding for packet erasure channels

Date and Time: 22 October 2021, 2:00 PM (AEDT).

Speaker: Prof. Margreta Kuijper (University of Melbourne)

Abstract: Packet channels are subject to packet losses, which can be modeled as packet erasures: we know a lost packet's location but not its content. Channel coding provides a way to recover much of this content and thus protect against the impact of packet losses. The straightforward choice is then MDS block codes. When coding against burst erasure patterns, a larger playing field is provided by the class of convolutional codes, and it has been shown that these can also be optimal.

In many modern interactive applications, such as interactive video and telesurgery, there are requirements to jointly optimize latency, throughput, and error rate. Since 2004, two different approaches to the coding of packets have emerged in the literature. In this talk I review some of the results obtained in each of these approaches, including my own. In all of these approaches, the decoding delay is explicitly involved. I will then connect this with new work in the very recent 2021 literature. This area still attracts a lot of attention because of its relevance to URLLC, ultra-reliable and low-latency communications.
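A minimal sketch of the erasure-recovery idea above, using a toy single-parity code rather than the MDS or convolutional constructions discussed in the talk (the packet contents are illustrative):

```python
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    # Append a single XOR parity packet (all packets assumed equal length).
    # Any one erasure among the n+1 coded packets is then recoverable.
    return packets + [reduce(xor_bytes, packets)]

def recover(received):
    # With exactly one packet erased, the missing packet is the XOR of
    # everything that did arrive (parity included).
    return reduce(xor_bytes, received)

data = [b"pkt0", b"pkt1", b"pkt2"]
coded = encode(data)
received = [p for i, p in enumerate(coded) if i != 1]  # packet 1 is erased
assert recover(received) == data[1]
```

An MDS code generalizes this: with k data packets and n coded packets, any k received packets suffice; the single-parity code is the n = k + 1 special case.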

Interference Management with Discrete Signaling and Treating Interference as Noise

Date and Time: 24 September 2021, 2:00 PM (AEST).

Speaker: Dr Min Qiu (UNSW)

Abstract: In this talk, I will revisit the two-user Gaussian interference channel (G-IC), the most fundamental channel for studying interference in wireless networks, where multiple transmissions share and compete for the same medium. Unlike most works, which assume Gaussian signaling, we focus on discrete input signaling and treating interference as noise (TIN), i.e., no successive interference cancellation (SIC), for practical relevance. First, we study the corresponding deterministic interference channel (D-IC) and construct coding schemes that achieve the entire capacity region of the D-IC under TIN. These schemes are then systematically translated into multi-layer superposition coding schemes based on purely discrete inputs for the G-IC. We prove that the proposed scheme achieves the entire capacity region to within a constant gap for all channel parameters. To the best of our knowledge, this is the first constant-gap result under purely discrete signaling and TIN for the entire capacity region and all interference regimes. An important implication of our results is that practical coded modulations are actually superior to Gaussian signaling for handling interference when TIN is employed. (Slides)

Scheduling links in mm-wave IAB Networks

Date and Time: 10 September 2021, 2:00 PM (AEST).

Speaker: Prof. Stephen Hanly (Macquarie University)

Abstract: Scheduling in wireless networks is known to be a hard problem. Back-pressure algorithms were developed by Tassiulas and co-workers in the 1990s, and these algorithms require the computation of a maximum weighted independent set in a graph, which is NP-hard in general. In recent work, we have considered the problem of scheduling links in mmWave integrated access and backhaul (IAB) networks, which can be modelled graphically by a tree. At the same time, there is an interesting twist in terms of constraints: we need to consider constraints on the number of RF chains in the IAB nodes as well as the half-duplex constraints for the backhaul links. In this talk, we formulate the scheduling problem for IAB networks and present some distributed scheduling algorithms: one based on back-pressure, and a more distributed approach that we call local max-weight. We also introduce a highly distributed slot reservation scheme that has other advantages and can provide end-to-end service guarantees to users. Overall, we exploit the tree structure to obtain capacity-achieving algorithms whose complexity does not grow exponentially in the size of the network. This talk is based on joint work with Swaroop Gopalam and Phil Whiting.
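As a hedged sketch of the max-weight idea mentioned above: the brute-force search below activates the non-conflicting link set with the largest total backlog. The link names, queue lengths, and conflict set are illustrative assumptions, and the exhaustive search reflects why the general (non-tree) problem is NP-hard:

```python
from itertools import combinations

def max_weight_schedule(queues, conflicts):
    # Max-weight scheduling: among all sets of mutually non-conflicting links,
    # activate the set maximizing the total queue backlog (the "weight").
    # This is a maximum weighted independent set, hence the exhaustive search.
    links = list(queues)
    best, best_w = [], -1
    for r in range(1, len(links) + 1):
        for subset in combinations(links, r):
            if any((a, b) in conflicts or (b, a) in conflicts
                   for a, b in combinations(subset, 2)):
                continue  # subset is not an independent set
            w = sum(queues[l] for l in subset)
            if w > best_w:
                best, best_w = list(subset), w
    return best

queues = {"l1": 5, "l2": 3, "l3": 4}
conflicts = {("l1", "l2")}  # l1 and l2 share a node, so cannot be active together
print(max_weight_schedule(queues, conflicts))  # ['l1', 'l3'], total weight 9
```

Exploiting a tree topology, as the talk does for IAB networks, is what avoids this exponential enumeration.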

Channel Code Design for Beyond 5G: Primitive Rateless Codes

Date and Time: 27 August 2021, 2:00 PM (AEST).

Speaker: Dr. Mahyar Shirvanimoghaddam (University of Sydney)

Abstract: The design and optimization of short block-length codes has recently attracted attention for implementation on memory- or power-constrained devices, mainly in the context of Internet of Things applications and services. The rate-matching procedure is crucial to support various requirements and to adapt to varying channel conditions. This becomes even more important for modern wireless systems, like the fifth generation (5G) mobile communications standard, which has established a framework to include services with a diverse range of requirements, such as ultra-reliable and low-latency communications (URLLC) and massive machine-type communications (mMTC), in addition to traditional enhanced mobile broadband (eMBB). Existing rate-compatible (RC) codes are mainly constructed using puncturing and extending, which have been shown to be sub-optimal for short block lengths. Designing RC codes for short messages that support bit-level granularity of the codeword size and maintain a large minimum Hamming weight at various rates remains an open problem.

In this talk, I will introduce primitive rateless (PR) codes, which are mainly characterized by a primitive polynomial of degree k over GF(2). We will see that PR codes can be represented using 1) linear-feedback shift registers (LFSRs) and 2) Boolean functions. We characterize the average Hamming weight distribution of PR codes and develop a lower bound on the minimum Hamming weight, which is very close to the Gilbert-Varshamov bound. An interesting result is that, for any k, there exists at least one PR code that meets this bound. Simulation results show that a PR code with a properly chosen primitive polynomial can achieve a block error rate (BLER) performance similar to that of its eBCH code counterpart. This is because, while a PR code might have a slightly lower minimum Hamming weight than the eBCH code, it has fewer low-weight codewords. PR codes can be designed for any message length and arbitrary rate, and perform very close to finite block-length bounds. They are rate-compatible and have a very simple encoding structure, unlike most rate-compatible codes, which are designed by puncturing a low-rate mother code and have mostly sub-optimal performance at various rates. (YouTube)
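A minimal sketch of the LFSR representation, using a toy maximal-length register rather than the specific PR code construction from the talk (the tap positions and initial state are illustrative assumptions):

```python
def lfsr_sequence(taps, state, n):
    # Fibonacci LFSR: output the last register bit, feed back the XOR of the
    # tapped bits, and shift. With taps from a primitive polynomial, a nonzero
    # k-bit state cycles through all 2^k - 1 nonzero states.
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

# Degree-3 example: taps chosen to be maximal-length, so the output
# sequence has period 2^3 - 1 = 7.
seq = lfsr_sequence(taps=[0, 2], state=[1, 0, 0], n=14)
print(seq[:7] == seq[7:14])  # True: the sequence repeats with period 7
```

A rateless code built this way can keep clocking the register to emit as many output bits as the channel requires, which is the bit-level rate granularity the abstract refers to.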

Millimeter-Wave Beam Alignment and Tracking: Fundamental Limits and Design Insights

Date and Time: 13 August 2021, 2:00 PM (AEST).

Speaker: Prof. Min Li (Zhejiang University)

Abstract: Millimeter-Wave (mmWave) communication is one of the important means to expand the system capacity of B5G and 6G cellular networks, thanks to the abundant frequency bands in the range of 30-300 GHz. Swift and accurate alignment of transmit and receive beams is one of the fundamental design challenges to enable reliable outdoor mmWave communication. In this talk, I will present our recent progress on new beam alignment and tracking schemes, discuss their performance limits and highlight some design insights on practical implementation. Some future research topics will also be briefly discussed, including learning-based beam alignment and tracking, cooperative mmWave networking, and integrated sensing and communication in mmWave.


(Joint work with Prof. Stephen Hanly, Prof. Philip Whiting and Prof. Iain Collings at MQ and Prof. Chunshan Liu at Hangzhou Dianzi University.)


(Slides of the talk)

Learn to Reflect and to Beamform Without Explicit Channel Estimation

Date and Time: 4 June 2021, 10:00 AM (AEST).

Speaker: Prof. Wei Yu (University of Toronto)

Abstract: Intelligent reflecting surface (IRS), which consists of a large number of tunable reflective elements, is capable of enhancing the wireless propagation environment in a cellular network by intelligently reflecting the electromagnetic waves from the base station (BS) toward the users. The optimal tuning of the phase shifters at the IRS is, however, a challenging problem because, due to the passive nature of the reflective elements, it is difficult to directly measure the channels between the IRS, the BS, and the users. Instead of following the traditional approach of first estimating the channels and then optimizing the system parameters, this work advocates a machine learning approach that directly optimizes both the beamformers at the BS and the reflective coefficients at the IRS based on a system objective. This is achieved by using a deep neural network to parameterize the mapping from the received pilots to an optimized system configuration, and by adopting a permutation invariant/equivariant graph neural network (GNN) architecture to capture the interactions among the different users in the cellular network. We show that the proposed approach is generalizable, can be interpreted, and can efficiently learn to maximize the system objective from far fewer pilots.

(Joint work with Tao Jiang and Hei Victor Cheng)


Proving and Disproving Information Inequalities

Date and Time: 21 May 2021, 2:00 PM (AEST).

Speaker: Dr Siu Wai Ho (University of Adelaide)

Abstract: Proving or disproving an information inequality is a crucial step in establishing the converse results in coding theorems. However, an information inequality involving more than a few random variables is difficult to prove or disprove manually. In 1997, Yeung developed a framework that uses linear programming for verifying linear information inequalities. Building on this framework, this talk considers a few other problems that can be solved by using Lagrange duality and convex approximation. We will demonstrate how linear programming can be used to find an analytic proof of an information inequality, or an analytic counterexample to disprove it if the inequality is not true in general. The way to automatically find a shortest proof or a smallest counterexample is also explored. When a given information inequality cannot be proved, sufficient conditions for a counterexample to disprove it are found by linear programming. Based on these results, a free web service, named AITIP, has been developed. We will show some examples to demonstrate the usage of AITIP. (YouTube)


High-speed underwater acoustic communications – Challenges and solutions

Date and Time: 7 May 2021, 2:00 PM (AEST).

Speaker: Prof. Yue Rong (Curtin University)

Abstract: The underwater acoustic (UA) channel, especially the shallow water UA channel, is one of the most challenging channels for wireless communication, due to its extremely limited bandwidth, severe fading, strong multipath interference, and significant Doppler shifts. In this talk, we first introduce the characteristics of UA channels. Then, we discuss the challenges in the communication system design. After this, we present some of our research outcomes in high-speed single-carrier and OFDM UA communications, including both the transmitter and receiver design. Research in underwater communication is interdisciplinary in nature, which involves multiple areas including acoustics, electronics, signal processing, and communication technology. We share our experience and lessons learned over the years and hope that this talk will inspire researchers to work in the challenging area of UA communication. (YouTube)

3D Tomographic Retrieval of Rain Field Using LEO Satellite Signals

Date and Time: 23 Apr. 2021, 2:00 PM (AEST).

Speaker: Prof. David Huang (University of Western Australia)

Abstract: Several companies have planned to deploy low earth orbit (LEO) satellite constellations to provide global Internet services. For example, SpaceX plans to deploy 42,000 satellites with about 1,400 currently operational. Similar to using X-ray to carry out human-body CT scans, using the microwave signals from those LEO satellites, it is possible to achieve three dimensional tomographic retrieval of the atmosphere, particularly the rain field. In this talk, research progress in this regard will be reported. Retrieval of things beyond rain, such as clouds and flying objects, will also be discussed. (YouTube, Slides)

Physical Layer Design of Terahertz Communications for 6G Era

Date and Time: 9 Apr. 2021, 2:00 PM (AEST).

Speaker: Assoc/Prof. Nan Yang (Australian National University)

Abstract: Terahertz communication (THzCom) is an emerging research area offering many interesting and challenging research problems for telecommunications scientists, engineers, and industry professionals. By exploiting the inherent advantages of the THz frequency band, ranging from 0.1 to 10 THz, THzCom is widely envisioned as one of the key enablers of ubiquitous wireless communications for the 6G era, with great potential to realise ultra-high user data rates on the order of terabits per second and accommodate a massive number of connected devices. Possible applications of the resulting THzCom networks include ultra-high-definition content streaming among mobile devices and wireless massive-core computing architectures. In this talk, I will first give a general introduction to THzCom, briefly reviewing THz-band devices and THz-band channel models. I will then focus on novel physical-layer solutions for THzCom systems and present novel communication mechanisms, such as hybrid beamforming, and new performance analysis frameworks, such as system-level reliability analysis. To conclude, I will identify and discuss some research challenges and open problems in THzCom.

(Slides of the talk)

Federated Learning and Beyond for 5G and Beyond

Date and Time: 26 Mar. 2021, 2:00 PM (AEDT).

Speaker: Dr Jihong Park (Deakin University)

Abstract: Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond. By imbuing intelligence into the network edge, edge nodes can proactively carry out decision-making, and thereby react to local environmental changes and disturbances while experiencing zero communication latency. To achieve this goal, it is essential to cater for high ML inference accuracy at scale under time-varying data distributions, by continuously exchanging ML model updates in a distributed way while preserving local data privacy. Taming this new kind of data traffic boils down to improving the communication efficiency of distributed learning by co-designing communication and ML operations. To this end, this talk aims to provide an overview of communication-efficient and distributed ML frameworks such as federated learning, federated distillation, and split learning with selected use cases. (YouTube)

Computational neuroscience: Brains, networks, models and inference

Date and Time: 12 Mar. 2021, 2:00 PM (AEDT).

Speaker: Assoc/Prof. Adeel Razi (Monash University)

Abstract: In this talk, I will discuss the multidisciplinary area of 'computational neuroscience'. There are several parallels and interesting links between neuroscience, artificial intelligence, and network engineering, which I will highlight during the talk [1]. The key concepts are networks, models, inference, and information processing. This talk will have two parts. In the first part, I will describe the basics of neuroimaging, i.e., how to measure brain signals, which can then be used to challenge computational models of brain function. Then, I will provide one concrete example of (generative) modelling of brain networks using engineering and probabilistic tools [2, dynamic causal modelling], which include state-space models, spectral time-series analysis, and Bayesian (variational) inference. In the second part, I will introduce and discuss active inference (a corollary of the free energy principle [3,4]): an approach to understanding behaviour that rests upon the idea that the brain uses an internal generative model to predict incoming sensory data. Active inference can be summarised as self-evidencing [5], in the sense that action and perception can be cast as maximising Bayesian model evidence under generative models of the world.

[1] D. Hassabis, D. Kumaran, C. Summerfield, M. Botvinick (2017), "Neuroscience-inspired artificial intelligence", Neuron.

[2] A. Razi and K. J. Friston (2016), "The connected brain: Causality, models and intrinsic dynamics", IEEE Signal Processing Magazine, vol. 33, no. 3, pp. 14-35.

[3] Free energy principle: https://en.wikipedia.org/wiki/Free_energy_principle

[4] M. J. D. Ramstead, P. B. Badcock, K. J. Friston (2018), "Answering Schrödinger's question: a free-energy formulation", Phys. Life Rev. 24, 1-16. doi:10.1016/j.plrev.2017.09.001.

[5] J. Hohwy (2016), "The self-evidencing brain", Noûs 50:259-285. https://doi.org/10.1111/nous.12062


(Slides of the talk)


Information, Games, and Machine Learning for Cybersecurity

Date and Time: 4 Dec. 2020, 2:00 PM (AEST).

Speaker: Prof. Tansu Alpcan (The University of Melbourne)

Abstract: The prevalence of networked cyberphysical systems increases the importance of cybersecurity as a research topic. Addressing complex cybersecurity problems requires a multi-disciplinary approach. Information, game, and optimisation theories combined with deep learning constitute a powerful set of methods for this purpose. This talk will first give an overview of game theory, which provides a solid mathematical foundation for analysing security games, where the interaction between independent decision makers (players) are adversarial. Then, an information-theoretic approach to deep learning problems will be presented within the context of conceptual dictionary learning. Finally, an encoding-based distributed anomaly detection framework will be discussed for software-defined wireless networks.

A Revisit to Convolutional Codes: Towards Linear Complexity MAP Decoding

Date and Time: 20 Nov. 2020, 2:00 PM (AEST).

Speaker: Prof. Yonghui Li (University of Sydney)

Abstract: Convolutional codes have been widely used in various modern communications systems. Their popularity stems from their simple encoder structure, which can be implemented with shift registers. The main complexity in systems using convolutional coding is situated in the decoder: the complexity of optimal maximum a posteriori probability (MAP) decoding of convolutional codes increases exponentially with the code constraint length.

In this talk, I will revisit the decoding process of convolutional codes. In our preliminary research, we discovered some interesting explicit relationships between the encoding and decoding of rate-1 binary and non-binary convolutional codes. We observe that the forward and backward BCJR soft-in soft-out (SISO) MAP decoders can be simply represented by their dual SISO channel encoders using shift registers in the complex number field. Bidirectional MAP decoding can be implemented by linearly combining the shift-register contents of the dual SISO encoders of the respective forward and backward decoders. The discovered decoder structure reduces the exponential complexity of MAP decoding to a linear one. Dual encoder structures for various recursive and non-recursive rate-1 binary and non-binary convolutional codes are derived.
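As a hedged illustration of the shift-register encoding the abstract mentions, the toy rate-1 accumulator below is not the dual decoder structure from the talk, only a minimal example of a recursive rate-1 convolutional encoder with a one-bit register:

```python
def accumulator_encode(bits):
    # Rate-1 recursive convolutional code: c_k = b_k XOR c_{k-1},
    # implemented with a single-bit shift register holding c_{k-1}.
    state = 0
    out = []
    for b in bits:
        state ^= b       # register update (the "shift")
        out.append(state)
    return out

print(accumulator_encode([1, 0, 1, 1]))  # [1, 1, 0, 1]
```

The talk's result concerns the reverse direction: representing the MAP decoder itself by a dual encoder of this shift-register form, in the complex number field, so that decoding complexity becomes linear in the constraint length.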

Grant-free Non-Orthogonal Multiple Access (NOMA) for Massive IoT: Concept, Receiver Design, Challenges, and Future Directions

Date and Time: 6 Nov. 2020, 2:00 PM (AEST).

Speaker: Dr. Basit Shahab (University of Newcastle)

Abstract: The grant-based random access procedures applied in existing wireless communication systems, primarily designed for human-type communications (HTC), are notorious for their huge signalling overhead and latency. While improvements in grant-based access procedures are ongoing, these might not be enough to cater for the massive, diverse, and sporadic traffic of IoT networks. To this end, grant-free access has gained significant research interest: devices can transmit their data in an arrive-and-go manner without any prior grant-access procedure. However, as grant-free access is still contention-based, and given the existing orthogonal multiple access (OMA) schemes, such access is vulnerable to collisions when multiple devices choose the same resource block, and may become a major performance bottleneck in massive IoT networks. To tackle this, non-orthogonal multiple access (NOMA)-based grant-free access has been identified as a promising solution. As NOMA allows multiple data streams to be transmitted over the same resource block using different multiple-access signatures, i.e., power levels, spreading sequences, or scrambling/interleaving patterns, it can significantly reduce the number of collisions and retransmissions. However, collision and multi-user data detection become complex in grant-free NOMA, as the receiver has no prior knowledge of the number of devices that have transmitted over a particular resource block, their randomly chosen NOMA signatures, or other transmission parameters.

In this context, this lecture gives a comprehensive overview of grant-free NOMA, the proposed schemes in literature, various receiver designs, ongoing works, open research/practical challenges, and possible future directions.

Machine Learning for Detection of Attacks on Cyber-Physical Systems

Date and Time: 23 Oct. 2020, 2:00 PM (AEST).

Speaker: Prof. Teng Joon (TJ) Lim (The University of Sydney)

Abstract: In cyber-physical systems powered by the Internet of Things, attackers may seek to disrupt communication networks in order to adversely impact the operations of cyber-physical systems that rely on them. In this talk, we will address two particular problems within that space: (i) the early detection of IoT botnets using a machine learning classifier, and (ii) the detection of malicious roadside units (RSUs) in a vehicular network through a trust-based system built on statistical detection theory. (Slides)

Pliable Communications

Date and Time: 9 Oct. 2020, 2:00 PM (AEST).

Speaker: Assoc/Prof. Lawrence Ong (University of Newcastle)

Abstract: Human communication has always been built on sending specific data. The rise of machine communications challenges this presumption and spawns opportunities for more efficient transmissions. This talk discusses applications of pliable communications, where receivers demand non-specific data, and presents recent results in this new communication paradigm.

Optimizing Information Freshness in Wireless IoT Networks: A Multi-User Scheduling Perspective

Date and Time: 25 Sep. 2020, 2:00 PM (AEST).

Speaker: Dr. He Chen (Chinese University of Hong Kong)

Abstract: More and more emerging Internet of Things (IoT) applications involve status updates, where various IoT devices monitor certain physical processes and report their latest statuses to the relevant information fusion nodes. In many time-critical IoT applications, the system performance is heavily determined by the freshness and timeliness of information exchanged through the network. The age of information (AoI) concept has recently been proposed as a means to quantify information freshness. Recent research on AoI suggests that well-known design principles of wireless networks based on traditional objectives, such as achieving high throughput, need to be re-examined if information freshness is the new target. In this context, how to schedule various time-sensitive traffic to minimize the network-wide AoI performance has become a fundamental and significant problem in the design of real-time wireless IoT networks. In this talk, I will first introduce the definition of AoI, and then elaborate on our recent efforts on designing AoI-oriented multi-user scheduling schemes. (Slides, YouTube, Bilibili)
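A minimal sketch of the AoI definition introduced above: the instantaneous age is the current time minus the generation time of the freshest update delivered so far (the timestamps below are illustrative assumptions):

```python
def age_at(t, deliveries):
    # deliveries: list of (delivery_time, generation_time) pairs.
    # The instantaneous AoI at time t is t minus the generation time of the
    # freshest update delivered by time t.
    gens = [g for d, g in deliveries if d <= t]
    return t - max(gens) if gens else t  # age grows from 0 if nothing delivered yet

# Updates generated at t=0 and t=4, delivered at t=1 and t=6.
deliveries = [(1, 0), (6, 4)]
print(age_at(5, deliveries))  # 5 - 0 = 5
print(age_at(7, deliveries))  # 7 - 4 = 3
```

The resulting sawtooth (age rising linearly, dropping at each delivery) is why AoI-optimal scheduling differs from throughput-optimal scheduling: a late delivery of an old update barely reduces the age.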

Pathway to Spectrum Intelligent Radio

Date and Time: 11 Sep. 2020, 2:00 PM (AEST).

Speaker: Dr. Peng Cheng (La Trobe University)

Abstract: The advent of Industrial IoT with massive connectivity places significant strain on current spectrum resources and challenges the industry and regulators to respond promptly with new disruptive spectrum management strategies. The envisioned spectrum intelligent radio has long been promised to unlock the full potential of spectrum resources. However, current radio development, with certain elements of intelligence, is nowhere near showing an agile response to complex radio environments. In this talk, following the line of intelligence, we classify spectrum intelligent radio into three streams: classical signal processing, machine learning (ML), and contextual adaptation. We focus on the contemporary ML approach and introduce an intelligent radio architecture with three hierarchical forms: perception, understanding, and reasoning. For each form, we present some of our preliminary results, highlighting the great potential of the roles that ML could play. Standardization, opportunities, challenges, and future visions are also discussed for the realization of a fully intelligent radio. We will also extend this concept to future intelligent wireless communications. (Slides)


Nonstochastic Information Theory and Worst-Case State Estimation

Date and Time: 28 August 2020, 2:00 PM (AEST).

Speaker: Prof. Girish Nair (The University of Melbourne)

Abstract: There is a close connection between achievable performance and communication capacity in state estimation and control problems. Under worst-case performance objectives over a point-to-point memoryless channel, it is known that the relevant metric of channel quality is its zero-error capacity, not its ordinary capacity. In this talk I’ll present some of the key ideas of nonstochastic information theory, and how they are relevant to problems of state-estimation via point-to-point, memoryless channels. I’ll then discuss some recent extensions to state estimation over multiple-access channels and channels with memory (joint work with Ghassen Zafzouf and Amir Saberi).

A novel signal waveform: OTFS modulation and its design for high mobility communications

Date and Time: 14 August 2020, 2:00 PM (AEST).

Speaker: Prof. Jinhong Yuan (University of New South Wales)

Abstract: Orthogonal time frequency space (OTFS) is a two-dimensional modulation scheme recently proposed by Cohere, which can provide reliable communications in high-mobility scenarios or over doubly-selective fading channels. One of its key features is to embrace the channel dynamics by modulating information in the delay-Doppler (DD) domain. The first part of this talk provides a brief overview of OTFS, highlighting its fundamental concept, underlying motivation, and specific features, including the DD-domain channel properties, DD-domain multiplexing, and the OTFS transceiver architecture. Then, critical challenges of OTFS, such as channel estimation, efficient data detection, and coding/decoding problems, will be discussed, and our preliminary results will be provided. Potential applications of OTFS will also be discussed.


Asymptotic Optimality in Byzantine Distributed Quickest Change Detection

Date and Time: 31 July 2020, 2:00 PM (AEST).

Speaker: Assoc/Prof. Yu-Chih (Jerry) Huang (National Chiao Tung University)

Abstract: Byzantine distributed quickest change detection (BDQCD) is studied, where a fusion center monitors the occurrence of an abrupt event through a set of distributed sensors that are subject to compromise. The current state of the art by Fellouris et al. [IEEE Trans. Inf. Theory 2018] exhibits an excellent tradeoff between the mean time to a false alarm and the expected detection delay. However, tight converse bounds are lacking; therefore, there is very little we can say about (asymptotic) optimality in BDQCD. In this work, we prove a novel converse to the first-order asymptotic detection delay in the large mean time to a false alarm regime. This converse is tight in that it coincides asymptotically with the best-known achievability shown by Fellouris et al.; hence, the optimal asymptotic performance of binary BDQCD is characterized. An important implication of this result is that, even with compromised sensors, a 1-bit link between each sensor and the fusion center suffices to achieve asymptotic optimality.

This talk is based on joint work with Yu-Jui Huang at the University of Colorado Boulder and Shih-Chun Lin at the National Taiwan University of Science and Technology. (YouTube)

Optimizing the Transition Waste in Coded Elastic Computing

Date and Time: 17 July 2020, 2:00–2:30 PM (AEST).

Speaker: Dr. Hoang Dau (RMIT University)

Abstract: Coded distributed computing, built upon algorithmic fault tolerance, is a recently emerging paradigm in which computation redundancy is employed to tackle the straggler effect, whereby one slow machine may become a bottleneck for the whole system. As a toy example, to perform a matrix-vector multiplication Ax, a master machine first partitions the matrix A into two equal-sized submatrices A1 and A2 and then distributes A1, A2, and A1 + A2 to three worker machines, respectively. These machines also receive the vector x and perform the three multiplications A1x, A2x, and (A1 + A2)x in parallel. Clearly, Ax can be recovered by the master from the outcomes of any two workers; thus, this coded scheme can tolerate one straggler. Coded computing has been shown to work for linear functions as well as for any nonlinear function that can be represented by a deep network.
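The toy example above can be sketched directly; this minimal simulation (with an illustrative matrix and plain-Python linear algebra) recovers Ax after one worker straggles:

```python
def matvec(M, x):
    # Matrix-vector product over nested lists.
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def add(M, N):
    # Elementwise sum of two equal-sized matrices.
    return [[a + b for a, b in zip(r, s)] for r, s in zip(M, N)]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 0, 1]]
x = [1, 2, 3]

# Master splits A into two halves and forms the coded block A1 + A2.
A1, A2 = A[:2], A[2:]
tasks = {"w1": A1, "w2": A2, "w3": add(A1, A2)}

# Workers compute their block times x in parallel; suppose w2 straggles.
results = {w: matvec(M, x) for w, M in tasks.items() if w != "w2"}

# Master recovers A2 x = (A1 + A2) x - A1 x, then reassembles Ax.
A1x = results["w1"]
A2x = [c - a for c, a in zip(results["w3"], A1x)]
assert A1x + A2x == matvec(A, x)
```

Any two of the three worker outputs suffice, so the scheme tolerates one straggler at the cost of 50% redundant computation.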

Most of the research in the literature on coded distributed computing, however, assumes that the set of available worker machines remains fixed. In this talk, we discuss the case when the number of machines may vary during the computation. The key question is how to reassign computation tasks to the remaining machines so as to minimize the transition waste, a new concept introduced in our work that quantifies the total number of tasks existing machines must abandon or take on anew when a machine joins or leaves. This line of research was first studied by Yang et al. (ISIT'19), motivated by recently available services in the cloud computing industry, e.g., EC2 Spot or Azure Batch, where spare/low-priority virtual machines are offered at a fraction of the price of on-demand instances but can be preempted on short notice.

Deep Learning-based Techniques for Grant-Free Non-Orthogonal Multiple Access for B5G-IoT

Date and Time: 17 July 2020, 2:30 PM - 3:00 PM (AEST).

Speakers: Dr. Rana Abbas (The University of Sydney) and Dr. Tao Huang (James Cook University)

Abstract: 5G involves a broader spectrum (up to 60 GHz), allows for non-orthogonal techniques, and will most likely be OFDMA-based. Nonetheless, there is broad consensus in the literature that an orthogonal multiple access approach will not suffice to support the massive access of devices. This is because grant-based communications in cellular networks suffer from collision rates over the random access channel as high as 10% with fewer than ten active users. Moreover, the signalling overhead involved in establishing a link is about 30-50% of the payload size for messages less than 200 bits long. In terms of latency, the grant-based access procedure in LTE-A takes around 5-8 ms in the best-case scenario. Thus, grant-based access fails to meet many KPIs when massive connectivity is required for short-packet transmissions. Because of this, lightweight random access protocols have been heavily re-investigated over the past few years, and throughput has been improved by orders of magnitude with sophisticated yet still low-complexity transceiver algorithms. In this presentation, we present novel and promising results for deep learning (DL)-based techniques for joint user detection and decoding in grant-free non-orthogonal multiple access (GF-NOMA). Our results show that DL-based methods can achieve reliability close to that of maximum likelihood at a much-reduced complexity. We also present open research problems in this space, such as optimising loss functions, formulating multi-objective problems, and obstacles in implementation.

Deep Learning for Ultra-Reliable and Low-Latency Communications: Motivations, Case Studies and Future Directions

Date and Time: 3 July 2020, 2:00 PM (AEST).

Speaker: Dr. Changyang She (The University of Sydney)

Abstract: The beyond 5G (B5G) networks are expected to support diverse applications in vertical industries, such as the industrial IoT, vehicular networks, mining automation and remote healthcare. To meet the stringent quality-of-service (QoS) requirements of these emerging services, ultra-reliable and low-latency communication (URLLC) has been considered one of the most challenging scenarios in B5G networks. Driven by recent breakthroughs in the area of deep learning, combining deep learning algorithms with the theoretical principles of wireless communications has been considered a promising way of developing enabling technologies for B5G networks. In this talk, I will illustrate the motivations for using deep learning in wireless networks. Specifically, I will take a power control problem as an example, where the optimal solution of the problem is the well-known "water-filling" policy. Then, I will introduce how to develop learning-based solutions for URLLC in some case studies. The basic idea is to use deep neural networks as function approximators. Finally, I will discuss some future directions in deep learning for URLLC. (YouTube, Bilibili)
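For the power-control example mentioned in the abstract, the classical water-filling policy allocates power p_i = max(0, mu - N0/g_i) to a channel with gain g_i, with the water level mu chosen so the powers sum to the budget. A minimal sketch follows; the channel gains, noise power, and bisection solver are illustrative assumptions, not material from the talk.

```python
def water_filling(gains, p_total, n0=1.0, iters=100):
    """Solve for the water level mu by bisection, then return the
    per-channel powers p_i = max(0, mu - n0/g_i)."""
    inv = [n0 / g for g in gains]        # per-channel "floor heights" n0/g_i
    lo, hi = 0.0, max(inv) + p_total     # mu is bracketed in [lo, hi]
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - v) for v in inv)
        if used > p_total:
            hi = mu                      # water level too high
        else:
            lo = mu                      # water level too low (or exact)
    mu = (lo + hi) / 2
    return [max(0.0, mu - v) for v in inv]

# Three channels of decreasing quality, total power budget 3.0:
# the best channel gets the most power, the worst gets none.
powers = water_filling([1.0, 0.5, 0.1], p_total=3.0)
```

A deep network trained to imitate or outperform this mapping from gains to powers is the kind of learning-based baseline the talk uses as motivation.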

Molecular Communication: Connecting Nano/Micro-Things

Date and Time: 19 June 2020, 2:00 PM (AEST).

Speaker: Dr. Yuting Fang (University of Melbourne)

Abstract: A super-connected world is approaching, in which nanoscale devices will be interconnected anywhere and at any time. Looking 10–20 years ahead, the Internet of Nano/Micro-Things will enable numerous exciting applications, such as intra-body health monitoring and drug delivery. Molecular communication has emerged as a promising method to enable the Internet of Nano/Micro-Things since it is bio-compatible and consumes little energy. This talk will first introduce molecular communication by addressing what molecular communication is, why we study it, and how we study it. The speaker will then present her work in molecular communication, especially her work building a fundamental understanding of noisy molecular signalling among bacteria. Finally, she will briefly present ongoing work, including channel characterization, dynamic behaviour analysis of microorganisms, and possible interdisciplinary collaborations. (Video)


Covert Wireless Communications: Opportunities and Challenges

Date and Time: 5 June 2020, 2:00 PM - 2:30 PM (AEST).

Speaker: Dr. Shihao Yan (Macquarie University)

Abstract: Covert wireless communication has recently emerged as a new transmission technology to address privacy and security in wireless networks. In this talk, the key features of covert communication and various important design considerations will be clarified. Firstly, the differences between covert communication and the well-known physical-layer security will be presented. Then, the optimal signalling strategies for transmitting the message-carrying signal and the artificial-noise signal for covert communication will be discussed. Finally, some key challenges in the design of practical covert communication systems will be identified and future research directions in this context will be presented. (YouTube, Bilibili)

A New Receiver Design for Wireless Systems

Date and Time: 5 June 2020, 2:30 PM - 3:00 PM (AEST).

Speaker: Assoc/Prof. Xiangyun (Sean) Zhou (The Australian National University)

Abstract: This talk brings us back to an old and fundamental problem: how a communication receiver processes its received signal for information detection. We will take a look at a new receiver design that processes the received signal using joint coherent and non-coherent detection. What may surprise us is the somewhat unexpected performance advantage of the new receiver design over the state of the art. The content of the talk is easy to follow for anyone working in the general field of communications. There will certainly be fruitful discussion, as you will likely have many questions (and criticisms) about the new receiver design. (YouTube, Bilibili)

What can we learn by listening to WiFi signals through walls?

Date and Time: 22 May 2020, 2:00 PM (AEST).

Speaker: Prof. Iain Collings (Macquarie University)


5G IoT networks

Date and Time: 8 May 2020, 2:00 PM (AEST).

Speaker: Prof. Branka Vucetic (The University of Sydney)

Abstract: 5G networks with limited services, mainly enhanced mobile broadband (eMBB), are being deployed worldwide. The deployment of 5G is a turning point in the evolution of cellular networks from personal communications to a general-purpose technology. 5G networks will make possible new classes of advanced IoT applications that will bring transformative automation benefits to industry and critical infrastructure. The talk will give an overview of the main 5G benefits and its new technologies. The stringent requirements of mission-critical services, such as ultra-high reliability and low latency, make them the most challenging feature of 5G. The talk will give an overview of recent results on ultra-reliable low-latency communications (URLLC) for 5G and the design of a 5G SDN platform with URLLC algorithms and protocols. Examples of new IoT applications in industrial control and energy grids, enabled by 5G connectivity, will be discussed.


On privacy-preserving mechanisms for counting queries

Date and Time: 24 Apr. 2020, 2:00 PM (AEST).

Speaker: Prof. Parastoo Sadeghi (The Australian National University)

Abstract: Count queries on population-based data records aim to determine how many people in a population have certain attributes. To protect the privacy of individuals in such records, noise must be added to the true count before releasing it to the data analyst. However, this induces a fundamental, non-trivial tradeoff between data privacy and accuracy. In 2020, the US Census will adopt a well-known privacy-preserving framework, namely the differential privacy (DP) framework, for releasing census data to the public. Other countries, such as Australia and New Zealand, are also closely examining the applicability of DP to their future censuses.

In this talk, we examine the pros and cons of some existing DP-based methods for count queries and propose a new method that addresses error probability, deviation from the true count, error bias and DP within a single optimization framework. We present closed-form results on the tradeoff between DP parameters and error parameters and analyse the findings for their potential future use.
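As background for the count-query setting, here is a minimal sketch of the standard Laplace mechanism, the textbook DP baseline the talk compares against rather than the method it proposes. A count query has sensitivity 1 (one individual changes the count by at most 1), so adding zero-mean Laplace noise with scale 1/eps yields eps-DP. The records, predicate and epsilon below are illustrative.

```python
import math
import random

def true_count(records, predicate):
    """Exact count of records satisfying the predicate."""
    return sum(1 for r in records if predicate(r))

def laplace_noise(scale, rng):
    """Sample a zero-mean Laplace variate via inverse-CDF sampling."""
    u = rng.random() - 0.5                     # uniform on (-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, eps, rng=None):
    """eps-DP noisy count: true count plus Laplace(1/eps) noise."""
    rng = rng or random.Random()
    return true_count(records, predicate) + laplace_noise(1.0 / eps, rng)

# Illustrative query: how many people are aged 40 or over?
records = [{"age": a} for a in [23, 34, 45, 19, 67, 52]]
noisy = dp_count(records, lambda r: r["age"] >= 40, eps=1.0,
                 rng=random.Random(0))
```

Smaller eps means larger noise scale and stronger privacy but a less accurate released count; the talk's contribution is to manage error probability, deviation and bias jointly rather than through the noise scale alone.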