תשפ"ה
Title: From Standardization to Theory and Back
Abstract:
Graph databases are becoming increasingly popular thanks to their natural data model, which captures connections that are cumbersome to express in the relational model. Indeed, graph databases are used in a plethora of domains, ranging from social to biological networks, and for various use cases including fraud detection and investigative journalism. Since 2019, GQL (Graph Query Language) has been developed under the auspices of ISO as the new standard for querying graph databases, akin to SQL for relational databases. In this talk, I will present a researcher’s digest of GQL by describing its underlying theoretical model. I will demonstrate how we can use tools from formal language and automata theory to show the limitations of this new standard, which can hint at extensions for its next versions.
This talk is based on recent papers with Nadime Francis, Amélie Gheerbrant, Paolo Guagliardo, Leonid Libkin, Victor Marsault, Wim Martens, Filip Murlak, Alexandra Rogova, and Domagoj Vrgoč.
Title: From Cobol to Copilot
Abstract:
What can the history of programming languages teach us about paradigm shifts?
The Generative AI revolution is coming to Software Engineering. How different will it be from past revolutions in this field? This talk reviews the history of programming languages, from the first compilers to recent code generators. From the great changes that once shook the world of software development, we will try to predict the impact of the coming wave. Listen carefully during the presentation, and count: how many times has the phrase "We will not need to code any more, just to write in plain English!" been said before?
Title: Query Refinement for Diversity Constraint Satisfaction
Abstract: Database queries are commonly used to select and rank items for decision support across various domains. As automated decision-making tools become more prevalent, there is an increasing emphasis on diversity and group representation in their outcomes, which in turn requires constraints on the sizes of different subgroups in the result set. Traditional relational queries only specify conditions in their predicates and do not accommodate such output restrictions. In this talk, I will present the problem of query refinement for constraint satisfaction -- modifying queries to ensure that the results meet size constraints for multiple subgroups. Although this problem cannot be solved in polynomial time in the worst case, we leverage provenance annotation techniques to develop efficient query refinement methods for practical use.
Bio: Yuval is an Assistant Professor at the Department of Computer Science at Ben-Gurion University. She obtained her Ph.D. in Computer Science from Tel Aviv University and was a postdoctoral researcher at the University of Michigan. Her research is centered around data management, advanced database applications, provenance, and process analysis. Her current work focuses on data management for fairness and responsible data science.
Title: Locally-iterative coloring in CONGEST networks
Abstract:
Locally-iterative algorithms for distributed coloring are algorithms that work as follows. Computation proceeds in parallel by the vertices of the network graph, in synchronous rounds. Initially, the network graph is colored in a trivial way using n colors, where n is the number of vertices in the graph. In each round, each vertex sends its color to its neighbors, receives its neighbors' colors, and computes a new color based on the current colors in its neighborhood. In CONGEST networks the message size is bounded by O(log n), and indeed this is sufficient to represent a color of a vertex. After a certain number of rounds of a locally-iterative algorithm, the coloring of the entire graph becomes a (Delta + 1)-coloring, where Delta is the maximum degree of the graph, at which point the algorithm has reached its goal.
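To make the framework concrete, here is a toy Python sketch of a naive locally-iterative algorithm (not one from the talk): it reaches a (Delta + 1)-coloring, but may need up to O(n) rounds, far more than the bounds discussed below.

```python
def locally_iterative_coloring(adj):
    """Naive locally-iterative (Delta+1)-coloring, simulated synchronously.

    adj: dict mapping each vertex to the set of its neighbors.
    Start from the trivial n-coloring; in every round, each vertex whose
    color is still too large and is a strict local maximum recolors itself
    with the smallest color unused in its neighborhood. The coloring stays
    proper because two adjacent vertices are never both strict local maxima.
    """
    delta = max(len(nbrs) for nbrs in adj.values())
    color = {v: i for i, v in enumerate(adj)}       # trivial n-coloring
    rounds = 0
    while max(color.values()) > delta:              # not yet a (Delta+1)-coloring
        movers = [v for v in adj
                  if color[v] > delta and all(color[v] > color[u] for u in adj[v])]
        for v in movers:                            # one synchronous round
            used = {color[u] for u in adj[v]}
            color[v] = min(c for c in range(delta + 1) if c not in used)
        rounds += 1
    return color, rounds

# A path on 5 vertices: reaches a 3-coloring in 2 rounds.
print(locally_iterative_coloring({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}))
```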
A heuristic lower bound of Szegedy and Vishwanathan from 1993 states that Omega(Delta log Delta + log-star n) rounds are required for a locally-iterative (Delta+1)-coloring algorithm, unless there exists a very special type of coloring that can be very efficiently reduced. In this talk we provide such a (Delta + 1)-coloring algorithm, with a complexity of O(Delta + log-star n) rounds, which is below the heuristic lower bound. In addition, we consider more general types of coloring, specifically k-distance coloring. In such a coloring, any pair of vertices at distance at most k from one another must obtain distinct colors. This is challenging in the CONGEST setting, since each vertex can obtain only a small portion of its k-hop neighborhood in a round; thus previous results required high round complexity. We provide a general technique for speeding up k-distance algorithms that can be expressed using certain types of operations. This results in a significant speed-up, which is sometimes quadratic, and in specific cases exponential, such as for 2-distance O(Delta^4)-coloring.
The talk is based on joint work with Elkin and Goldenberg (JACM'21), and joint work with Goldenberg (DISC'24).
Title: A Global View of Locality through the Lens of Distributed Computing, Parallel Computing, and Learning Theory
Abstract:
Locality, i.e., computing under partial knowledge, is a fundamental challenge that manifests itself in many different ways: from a distributed network where each processor has to rely on local information to solve global tasks (such as routing), to a sublinear algorithm that only has access to a small fraction of the input before outputting some estimate or approximate solution. In this talk, I will highlight the many faces of this challenge in the distributed, parallel, and learning settings.
- Distributed settings: CONGEST is a distributed model which aims to capture both locality and communication limitations. I will present several results for this model, which highlight the interplay of the two limitations, and in particular the subgraph freeness problem - a very local problem that requires a surprisingly global solution. Additionally, I will discuss how to overcome challenges arising from the inclusion of other considerations, such as security, and resilience to crashes and corruptions.
- Parallel settings: The widely adopted Map-Reduce framework revolutionized parallel computation in industry, and has been the focus of much theoretical research. The Massively Parallel Computation (MPC) model is one avenue to explore this framework. I will discuss several results in the MPC model, and in particular an interesting new research direction, in which we show that even a single machine with a slightly more global view is sufficient to overcome many of the known lower bounds for this setting. At the heart of our techniques are sampling lemmas designed to connect small but informative parts of the input (e.g. random subgraphs if the input is a graph) to global properties of the input (a toy illustration follows after this list).
- Learning Theory: In the PAC learning setting, we would like to be able to answer queries about a set of data elements of which we have no prior knowledge, other than some given training examples randomly drawn from a distribution. We are hence required to extrapolate from the local (our training set) to the global (any new sample). We study comparison-based queries - queries which compare the similarity of different pairs of elements. In the course of this line of work, we obtained new insights into the k-nearest-neighbor problem, surprisingly through techniques developed for distributed computing.
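To illustrate the flavor of such sampling lemmas (a toy example, not a lemma from the papers), here is how a uniform sample of vertex pairs yields an unbiased estimate of a global quantity, the edge count:

```python
import random

def estimate_edge_count(vertices, is_edge, samples=20_000, rng=random.Random(1)):
    """Estimate |E| from a small random sample of vertex pairs.

    The fraction of sampled pairs that turn out to be edges, scaled by the
    total number of pairs, is an unbiased estimator of the edge count.
    """
    n = len(vertices)
    hits = sum(is_edge(*rng.sample(vertices, 2)) for _ in range(samples))
    return hits / samples * n * (n - 1) / 2

# A path graph on 1000 vertices (999 edges): i ~ j iff |i - j| == 1.
print(estimate_edge_count(range(1000), lambda u, v: abs(u - v) == 1))
```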
Bio:
Orr Fischer conducted his PhD studies at Tel-Aviv University under the guidance of Prof. Rotem Oshman, focusing on distributed computing, sublinear algorithms, and communication complexity (2017-2022). He then joined the Weizmann Institute of Science as a postdoctoral fellow in Prof. Merav Parter's lab, where he researched distributed computing in Byzantine settings as well as PAC learning settings (2022-2024). Today (2024-), he is a postdoctoral fellow at Bar-Ilan University, hosted by Dr. Talya Eden, Dr. Arnold Filtser, and Prof. Esther Ezra.
Title: Segments in Robot Configuration Spaces
Abstract:
Sampling-based motion planning (SBMP) is an algorithmic approach to motion planning which uses random sampling in order to mitigate the problem’s complexity. SBMP algorithms also reduce the problem to finding a path in a geometric graph (roadmap) residing in an abstract space called the robot configuration space (C-space). The C-space is the set of all possible configurations of the robot - vectors of spatial coordinates and joint angles - in the input environment. It is not necessarily Euclidean, can be very high-dimensional, and is highly unintuitive. In my talk I will discuss C-space line segments, one of the building blocks of C-space geometric graphs. I will present specialized computational geometry algorithms adjusted for segments in high-dimensional non-Euclidean spaces, and some workarounds that enable fast updates for dynamic roadmaps.
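For instance, even a single C-space "segment" already reflects this non-Euclidean structure: angular coordinates must be interpolated along the circle rather than along the real line. A minimal sketch, with hypothetical names not taken from the talk:

```python
import math

def interpolate(q0, q1, t, angular):
    """Point at parameter t in [0, 1] on the C-space segment from q0 to q1.

    q0, q1: configurations given as tuples of floats.
    angular: per-coordinate flags; angular coordinates live on the circle
    and are interpolated along the shorter arc, not linearly in R.
    """
    q = []
    for a, b, is_angle in zip(q0, q1, angular):
        if is_angle:
            d = (b - a + math.pi) % (2 * math.pi) - math.pi   # shortest signed arc
            q.append((a + t * d) % (2 * math.pi))
        else:
            q.append(a + t * (b - a))
    return tuple(q)

# A planar arm: base position (x, y) plus two joint angles. Note the last
# joint wraps around through 2*pi instead of sweeping almost a full turn.
print(interpolate((0.0, 0.0, 0.1, 6.2), (1.0, 0.0, 0.3, 0.1), 0.5,
                  (False, False, True, True)))
```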
Bio: Stav is a PhD candidate at the Siebel School of Computing and Data Science at the University of Illinois Urbana-Champaign. He is co-advised by Prof. Sariel Har-Peled and Prof. Nancy Amato, and his research is in computational geometry for robot motion planning.
Title: Data Tools for Accelerated Scientific Discoveries
Abstract:
Like general data scientists, scientists conducting empirical research rely on data and analyze it to extract meaningful insights. Yet, scientists have two qualities that distinguish them from general data scientists: (1) they rely on extensive scientific domain knowledge to make scientific discoveries, and (2) they aim to explain and understand the real world, not simply predict it. These two qualities must be reflected in their data analysis tools to assist them and accelerate the process of making real-world discoveries. In this talk, I will present data tools for accelerated scientific discoveries. In particular, I will present tools that assist scientists in investigating their data using scientific knowledge, and tools that help scientists acquire the missing data and domain-specific knowledge required to understand real-world mechanisms and draw trustworthy conclusions.
Bio: Brit is an assistant professor in the Computer Science faculty at the Technion. Before that, she was a postdoctoral researcher at MIT CSAIL, working with Prof. Michael Cafarella. She received her Ph.D. from Tel-Aviv University under the supervision of Prof. Tova Milo. Her research is centered around informative and responsible data science and causal analysis.
Title: The Random Linear Code: The Mother Code
Abstract: Error-correcting codes are essential in nearly all forms of digital communication, enabling reliable transmission over imperfect channels. The random linear code (RLC) ensemble is a cornerstone of coding theory, known for its excellent combinatorial properties. However, RLCs lack structure, which often limits their algorithmic feasibility. In this talk, we will explore a line of research that uses a reduction paradigm to show how other, more structured codes can inherit the combinatorial advantages of RLCs, effectively merging the best of both worlds. We’ll also delve into intriguing connections between RLCs and other random combinatorial structures, such as random graphs.
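For a hands-on feel of the ensemble (a toy sketch; brute force, tiny parameters only): sample a random generator matrix over GF(2) and compute the code's minimum distance, which for random linear codes meets the Gilbert-Varshamov bound with high probability.

```python
import itertools
import numpy as np

def random_linear_code(n, k, rng=np.random.default_rng(0)):
    """Sample a k x n generator matrix of a random linear [n, k] code over GF(2)."""
    return rng.integers(0, 2, size=(k, n))

def minimum_distance(G):
    """Minimum Hamming weight over all nonzero codewords (exponential in k)."""
    k, n = G.shape
    best = n
    for msg in itertools.product([0, 1], repeat=k):
        if any(msg):
            best = min(best, int(((np.array(msg) @ G) % 2).sum()))
    return best

G = random_linear_code(n=20, k=5)
print(minimum_distance(G))
```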
Bio: Jonathan Mosheiff is a Senior Lecturer in the Department of Computer Science at Ben-Gurion University, specializing in theoretical computer science with a focus on coding theory. He completed his PhD at the Hebrew University, followed by postdoctoral positions at the Weizmann Institute and Carnegie Mellon University.
Title: Advancing Crime and Health Solutions with Hebrew Natural Language Processing
Abstract:
In this lecture, I will discuss how Hebrew natural language processing (NLP) applications are enhancing decision-making in law enforcement and healthcare. We will review recent NLP advancements that address the unique challenges of Hebrew language processing. First, I will introduce a method for automatically linking crimes by extracting behavioral patterns from Hebrew police reports, significantly reducing the manual workload for investigators. Next, I will present an explainable recommender system for surgical procedures, which integrates clinical text and demographic data to offer personalized surgery recommendations, emphasizing transparency in decision-making. These innovations underscore Hebrew NLP's potential to tackle linguistic complexities and foster progress in public safety and personalized healthcare.
תשפ"ד
Title: Primal-dual approximation algorithms for network design problems
Abstract:
In network design problems we are given a graph with edge costs and seek a cheap subgraph that satisfies certain requirements. Well-known examples are Shortest Path, Minimum Spanning Tree, Steiner Tree, Steiner Forest, and many others. Most of these problems are NP-hard, and thus approximation algorithms are of interest. In this talk I will survey the history and recent developments of one of the most successful methods for handling such problems -- the primal-dual method.
Specifically, 30 years ago, Williamson, Goemans, Mihail, and Vazirani [STOC 1993] gave a generic primal-dual 2-approximation algorithm that applies to a certain class of “uncrossable problems” and asked whether it applies to a larger class of problems. I will describe two recent results that answer this open question.
Title: Using Competitive Programming Tools in Teaching Computer Science
Abstract: Ofer Vald - an M.Sc. student at the Open University, a coach of programming-olympiad winners, and the teaching coordinator of the competitive programming workshop at the Open University - will briefly present his thesis work. In the talk I will briefly survey the world of competitive programming, present the major international competitions, and explain the difference between "competitive programming" and "a competition that requires programming". I will show the work done in building a website that gathers competitive programming problems as a training and learning tool for students. Finally, I will present a case study of the site's use in a specific introductory computer science course and its effect on learning in that course.
Title: Explaining Vision Models via Deep Integration
Abstract:
Today, computer vision models solve a variety of tasks well and are in very wide use. Despite these impressive achievements, understanding the predictions of such models remains challenging. As a result, a field called Explainable AI, which deals with exactly this capability, has emerged, bringing a variety of methods that make model predictions understandable.
In this talk we will see what it means to explain a computer vision model, and several generic methods for explaining such models will be presented. These methods produce explanation maps by integrating the information from the representations of the network's different layers together with their gradients. Extensive experiments were conducted on the metrics standard in the field, attesting to the effectiveness of the methods and their ability to provide better results than the other leading methods in the field.
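As a generic illustration of this family of methods (a sketch in their spirit, not the exact technique from the talk), the following aggregates gradient-times-activation maps from several layers of a CNN into one explanation map:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()        # random weights suffice for the demo
acts = {}

def save_output(name):
    def hook(module, inputs, output):
        output.retain_grad()                 # keep d(score)/d(activation)
        acts[name] = output
    return hook

for name, layer in [("l2", model.layer2), ("l3", model.layer3), ("l4", model.layer4)]:
    layer.register_forward_hook(save_output(name))

x = torch.randn(1, 3, 224, 224)              # stand-in for a real input image
score = model(x).max()                       # logit of the top-scoring class
score.backward()

heat = torch.zeros(1, 1, 224, 224)
for a in acts.values():                      # integrate over the chosen layers
    layer_map = F.relu((a * a.grad).mean(dim=1, keepdim=True))
    heat = heat + F.interpolate(layer_map, size=(224, 224), mode="bilinear")
heat = heat / heat.max()                     # explanation map, values in [0, 1]
```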
Bio:
Yehonatan Elisha holds a B.Sc. in software engineering from Ben-Gurion University, currently heads a computer vision research team, and is now completing his M.Sc. in computer science at the Open University.
Title: Context Dependent: Developing Multimodal Neural Architectures for Social Grounding of Language Models
Abstract:
State-of-the-art Natural Language Processing (NLP) systems are trained on massive collections of data. Traditionally, NLP models are uni-modal: one form of data, e.g., textual data, is used for training. However, recent trends focus on multimodality, utilizing multiple forms of data in order to improve a system’s performance on classic tasks as well as to broaden the capabilities of AI systems. Images and code are the two common modalities used in training popular tools such as OpenAI’s DALL-E text-to-image generator, GitHub’s Copilot programming assistant, and Google's Gemini. Language, however, is not merely a collection of stand-alone texts, nor texts merely grounded in images or aligned with code. Language is primarily used for communication between speakers in some social setting. The meaning (semantic, pragmatic) of a specific utterance is best understood by interlocutors who share some common ground and are aware of the context in which the communication takes place.
We posit that the structure of a social network is the result of social dynamics that implicitly encode the important social factors that shape communication and drive meaning and perception. Grounding texts in the network structure can mitigate (many of) the limitations of state-of-the-art LLMs. In this talk I will discuss the use of the social network for communication grounding and consider multimodal frameworks, combining text and social contexts.
I will demonstrate the general framework through two social granularities - global and local, and over a number of tasks from the detection of hate speech to the parsing of contentious discourse.
Bio:
Dr. Oren Tsur is an Assistant Professor (Senior Lecturer) at the Department of Software and Information Systems Engineering at Ben-Gurion University in Israel, where he heads the NLP and Social Dynamics Lab (NASLAB) and serves as the director of the newly founded Interdisciplinary Center for the Study of Digital Politics and Strategy (DPS@BGU).
Oren’s work combines Machine Learning, Natural Language Processing (NLP), Social Dynamics, and Complex Networks. Specifically, it ranges from sentiment analysis to modeling speakers’ language preferences, hate-speech detection, community dynamics, and adversarial influence campaigns.
Oren serves as an editor and Senior Program Committee member in venues like ACL, EMNLP, WSDM and ICWSM and as a reviewer for journals ranging from TACL to PNAS and Nature.
Oren’s work was published in top NLP and Web Science venues.
His work on sarcasm detection was listed among the “top 50 inventions of the year” in Time Magazine’s special technology issue.
Academic homepage: https://www.naslab.ise.bgu.ac.il/orentsur
Title: AI for animal welfare and wellbeing
Abstract:
In this talk I will survey some projects of the Tech4Animals Lab, which develops AI solutions for analyzing animal behavior and promoting their health, wellbeing and welfare. We will discuss how AI can be used for pain and emotion recognition from facial expressions in various species, welfare monitoring of sheltered dogs, behavioral testing of dogs and more.
Title: Leveraging Mathematics and Computing for Biologically-inspired Synthetic DNA Generation
Abstract:
Following the publication of the full human genome sequence in 2001, clinical research increasingly depends on genomics data. Currently, genomic data is scarce due to multiple financial, regulatory, and technical challenges. As such, synthetic genomic data that mimics realistic genomics is an intermediate solution for professionals looking to use genomic data to develop new methods. Existing in silico genomic sequence generators grapple with limitations in capturing intricate biological relations, often relying on shallow connections. I will present a unique approach that seamlessly integrates genetic algorithms and deep learning (DL) techniques. By mimicking genetic processes, the proposed approach generates synthetic genomes indistinguishable from their real counterparts. The development of the proposed method requires a complex encoding of the genomic data. The current gold standard for such tasks is the AutoEncoder (AE) DL architecture. However, the AE architecture poses challenges in influencing the embedding space, which can conflict with important biological properties. Thus, I will also present KiAE - a knowledge-integrated AE model that enables one to incorporate external knowledge sources effectively, so as to preserve desired properties in the embedding space.
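The KiAE details are in the papers; as a schematic illustration of the general idea only, an autoencoder can be trained with an extra penalty that pulls together the embeddings of items that external knowledge marks as related (all names below are hypothetical):

```python
import torch
import torch.nn as nn

class KnowledgeRegularizedAE(nn.Module):
    """Autoencoder with a knowledge-based penalty on the embedding space."""

    def __init__(self, dim, latent):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dim))

    def loss(self, x, related_pairs, lam=0.1):
        """related_pairs: index pairs that domain knowledge marks as related;
        their embeddings are pulled together on top of the reconstruction loss."""
        z = self.enc(x)
        recon = ((self.dec(z) - x) ** 2).mean()
        if not related_pairs:
            return recon
        knowledge = torch.stack(
            [((z[i] - z[j]) ** 2).sum() for i, j in related_pairs]
        ).mean()
        return recon + lam * knowledge

# model = KnowledgeRegularizedAE(dim=100, latent=8)
# loss = model.loss(torch.randn(32, 100), related_pairs=[(0, 3), (5, 9)])
```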
Bio:
Teddy received his BSc and MSc in mathematics from Bar-Ilan University, in 2016 and 2018, respectively, and his PhD in mathematics from Ariel University in 2021. During 2021-2023 he was a post-doctoral fellow at University College London, UK. He is interested in data-driven algorithms for clinical and biological datasets and in social-economic policy design for healthcare management. In particular, he specializes in multi-objective optimization problems and data-driven model development. His framework involves mathematical and computer science methods, utilizing bio-mathematical modeling (mainly differential equations), machine learning, and optimization algorithms.
Speaker: Dan Halperin, Tel-Aviv University.
Title: From snapping fixtures to multi-robot coordination: Geometry at the service of robotics
Abstract:
Robots sense, move and act in the physical world. It is therefore natural that understanding the geometry of the problem at hand is often key to devising an effective robotic solution. I will review several problems in robotics and automation in whose solution geometry plays a major role. These include designing optimized 3D-printable fixtures, object rearrangement by robot-arm manipulators, and efficient coordination of the motion of large teams of robots. As we shall see, exploiting geometric structure can, among other benefits, lead to reducing the dimensionality of the underlying search space and in turn to efficient solutions.
Title: Algorithms for Approximation of Random Variables
Abstract:
Discrete random variables are essential ingredients in various artificial intelligence problems. These include estimating the probability of missing the deadline in a series-parallel schedule, and assigning suppliers to tasks in a project in a manner that maximizes the probability of meeting the overall project deadline. Solving such problems involves repetitive operations, such as summation, over random variables; however, these computations are NP-hard. Therefore, in our research, we explore techniques and methods for approximating random variables with a given support size and minimal Kolmogorov distance. We examine both the general problem of approximating a random variable and a one-sided version in which over-approximation is allowed but not under-approximation. We propose several algorithms and evaluate their performance through computational complexity analysis and empirical evaluation. All the presented algorithms are optimal in the sense that, given an input random variable and a requested support size, they return a new approximated random variable with the requested support size and minimal Kolmogorov distance from the input random variable. Our approximation algorithms offer useful estimations of probabilities in situations where exact computations are not feasible due to NP-hardness.
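To fix the notions (names below are illustrative): the Kolmogorov distance is the sup-distance between CDFs, and approximation means collapsing a distribution onto fewer support points. The sketch shows the distance computation plus a naive quantile heuristic; the talk's algorithms, by contrast, find the optimal m-point approximation.

```python
import numpy as np

def kolmogorov_distance(xs1, ps1, xs2, ps2):
    """Sup-distance between the CDFs of two finite discrete random variables.
    xs*: sorted support arrays; ps*: matching probability arrays."""
    grid = np.union1d(xs1, xs2)
    cdf1 = np.array([ps1[xs1 <= x].sum() for x in grid])
    cdf2 = np.array([ps2[xs2 <= x].sum() for x in grid])
    return np.abs(cdf1 - cdf2).max()

def naive_approximation(xs, ps, m):
    """Collapse a discrete RV onto (at most) m support points at evenly
    spaced CDF quantiles -- a simple heuristic, not the optimal algorithms."""
    cdf = np.cumsum(ps)
    idx = np.searchsorted(cdf, (np.arange(m) + 1) / m, side="left")
    idx = np.minimum(idx, len(xs) - 1)
    new_ps = np.diff(np.concatenate(([0.0], cdf[idx])))
    uniq, inv = np.unique(xs[idx], return_inverse=True)   # merge duplicates
    return uniq, np.bincount(inv, weights=new_ps)

xs, ps = np.arange(100.0), np.full(100, 0.01)   # uniform over 100 points
axs, aps = naive_approximation(xs, ps, m=4)
print(kolmogorov_distance(xs, ps, axs, aps))
```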
Bio:
Liat Cohen is a faculty member at Ariel University who specializes in Artificial Intelligence, focusing on decision-making under uncertainty, planning, and machine learning. She obtained her Ph.D. from Ben-Gurion University of the Negev and was hosted by Prof. Malte Helmert at the University of Basel for her postdoctoral studies. Her recent work involves algorithms for approximating random variables and integrating machine learning into classical planning domains. Her research revolves around merging theoretical foundations with practical AI applications, aiming to develop sustainable algorithms for real-world problem-solving.
Title: Verification of Complex Hyperproperties
Abstract:
In formal verification, we develop algorithms to formally prove that a system adheres to its specification. Classically, specifications are logical formulas describing the requirements of the system executions.
Hyperproperties are system properties that relate multiple executions of the system and cover a wide range of requirements, from information-flow security to epistemic properties, such as knowledge of agents in multi-agent systems.
In this talk, we introduce the notion of hyperproperties and their various applications. We then present a new logic — Hyper^2LTL — that can express complex hyperproperties such as common knowledge in multi-agent systems. We also provide a novel (sound but necessarily incomplete) verification algorithm for Hyper^2LTL specifications.
HyperLTL is the prominent logic for specifying hyperproperties. HyperLTL reasons about the temporal behavior of programs, and, in addition, uses first-order quantification to relate multiple execution traces. This way, HyperLTL can express information-flow security properties such as noninterference — the value of low-security output can only depend on low-security inputs — by stating "for every two execution traces, if they agree on low-security inputs, then they agree on low-security outputs". While HyperLTL can express a wide variety of hyperproperties, it falls short of expressing more complex properties, such as common knowledge in multi-agent systems and asynchronous hyperproperties. To this end, we introduce Hyper^2LTL, which extends the first-order quantification of HyperLTL to a second-order quantification over sets of traces. Hyper^2LTL is a powerful formalism that allows us to express a wide range of complex hyperproperties. Yet, its expressiveness comes with a price — verifying the correctness of a system with respect to a Hyper^2LTL specification is undecidable.
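In HyperLTL syntax, the quoted noninterference requirement can be written as follows (one standard formulation, with l the low-security inputs and o the low-security outputs):

```latex
% \pi, \pi' range over execution traces; G is the "globally" operator.
\varphi_{\mathrm{NI}} \;=\; \forall \pi.\, \forall \pi'.\;
  \mathbf{G}\,(l_\pi = l_{\pi'}) \;\rightarrow\; \mathbf{G}\,(o_\pi = o_{\pi'})
```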
To maintain expressiveness and still verify complex hyperproperties, we present a partial algorithm providing definite answers in many useful cases for which no verification methods existed. Our experimental results demonstrate the effectiveness of the suggested algorithm.
Bio:
Hadar Frenkel is a postdoctoral researcher at CISPA Helmholtz Center for Information Security in Germany, in the Reactive Systems Group. She obtained her PhD from the Technion, Israel Institute of Technology, in 2021. Hadar studies complex hyperproperties, such as knowledge, causality, and asynchronous hyperproperties, and searches for logical formalisms and verification algorithms for them. She also studies automata learning and its applications in program verification and repair.
Title: The Guide Robot - Developing a Human-aware Artificial Collaborator
Abstract: In this talk I will present the guide robot grand challenge and discuss the components required to design and build a service robot that can replace or surpass the functionalities of a guide dog. This challenge encompasses a variety of research problems that can benefit from human-inspired AI: reasoning about other agents, human-robot interactions, explainability, teaching teammates and learning from them, and more. For each of these problems, I will present an example novel contribution that leverages the bilateral investigation of human and artificial intelligence. Finally, I will discuss the many remaining challenges to achieving a seeing-eye robot and how I plan to tackle these challenges.
Short Bio: Reuth Mirsky is a Senior Lecturer (Assistant Professor) at the Computer Science Department at Bar-Ilan University, where she heads the Goal-optimization using Learning and Decision-making (GOLD) lab. She received her PhD in 2019 from Ben-Gurion University and was a postdoc at the University of Texas until 2022. In her research, Reuth is interested in the similarities and differences between AI and natural intelligence and how these can be used to extend AI. Reuth is an active member of the AI and HRI research communities and was selected as one of the 2020 Electrical Engineering and Computer Science (EECS) Rising Stars.
https://sites.google.com/site/dekelreuth/
Title: Forcing machine learning systems back to reality
Abstract:
From classical machine learning systems such as SVMs and Random Forests to modern and complex Neural Networks - machine learning systems are great at finding correlations in data. Finding correlations, such as the correlation between an image and a label describing its contents, enables a wide range of advanced AI applications, from self-driving cars to Large Language Models such as ChatGPT. But as the often-ignored adage goes, "correlation does not imply causation", and AI systems often fail catastrophically when inputs deviate slightly from the sample distribution of their training datasets.
In this talk we will explore how we at ODDITY use physics- and fact-based constraints to create robust and explainable AI systems, how we leverage these frameworks to overcome data scarcity and/or data imbalance, and how we employ out-of-distribution detection methodologies to guide our AI decision process.
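ODDITY's in-house methods are not spelled out here, but as a generic illustration of the last ingredient, out-of-distribution gating, the following flags inputs whose feature vectors lie far from the training distribution (a hypothetical minimal detector, not ODDITY code):

```python
import numpy as np

class MahalanobisOOD:
    """Flag inputs whose features are far (in Mahalanobis distance)
    from the mean of the training features."""

    def fit(self, feats, quantile=0.99):
        self.mu = feats.mean(axis=0)
        self.prec = np.linalg.pinv(np.cov(feats, rowvar=False))
        self.threshold = np.quantile(self._dist(feats), quantile)
        return self

    def _dist(self, feats):
        c = feats - self.mu
        return np.sqrt(np.einsum("ij,jk,ik->i", c, self.prec, c))

    def is_ood(self, feats):
        return self._dist(feats) > self.threshold

rng = np.random.default_rng(0)
detector = MahalanobisOOD().fit(rng.normal(size=(1000, 16)))
print(detector.is_ood(rng.normal(size=(5, 16)) + 8.0))   # far inputs -> flagged
```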
Bio:
Boaz holds a Ph.D. in computer science and specializes in hyperspectral imaging and computer vision. After graduating from the BGU Interdisciplinary Computer Vision Lab, Boaz went on to co-found Voyage81, a computer vision startup based on work done during his graduate studies under the supervision of Prof. Ohad Ben-Shahar. Voyage81 was eventually acquired by ODDITY, where he currently serves as the Chief Vision Officer. ODDITY has since gone public on the NASDAQ stock exchange ($ODD) in what has been described as one of the largest and most successful IPOs of 2023.
Dr. Arad was recognized by the European Machine Vision Association (EMVA) with their "Young Professional Award" in 2017, and leads a biennial challenge on spectral recovery from RGB under the New Trends in Image Restoration and Enhancement (NTIRE) workshop at the Computer Vision and Pattern Recognition (CVPR) conference. This challenge has been considered the gold standard for performance in the field of hyperspectral recovery since 2018.
תשפ"ג
Title: General Multi-Sided Markets
Abstract:
In two-sided markets, Myerson and Satterthwaite’s impossibility theorem states that one cannot maximize the gain-from-trade while also satisfying truthfulness, individual rationality, and no deficit. Attempts have been made to circumvent Myerson and Satterthwaite’s result by attaining approximately-maximum gain-from-trade: the double-sided auction of McAfee (1992) is truthful and has no deficit -- it is weakly budget-balanced -- and the one by Segal-Halevi et al. (2016) additionally has no surplus -- it is strongly budget-balanced. They consider two categories of agents -- buyers and sellers -- where each trade set is composed of a single buyer and a single seller.
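For concreteness, here is a compact sketch of McAfee's trade-reduction double auction as it is usually presented (simplified; tie-breaking ignored):

```python
def mcafee_double_auction(bids, asks):
    """McAfee (1992): truthful, weakly budget-balanced double auction.

    bids: buyers' values; asks: sellers' costs (one item each).
    Returns (number_of_trades, buyer_price, seller_price).
    """
    bids = sorted(bids, reverse=True)        # buyers, high to low
    asks = sorted(asks)                      # sellers, low to high
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1                               # k = size of the efficient trade
    if k == 0:
        return 0, None, None
    if k < min(len(bids), len(asks)):
        p = (bids[k] + asks[k]) / 2          # price from the first excluded pair
        if asks[k - 1] <= p <= bids[k - 1]:
            return k, p, p                   # all k pairs trade at a single price
    # trade reduction: the k-th pair is dropped, the mechanism keeps the gap
    return k - 1, bids[k - 1], asks[k - 1]

print(mcafee_double_auction([10, 8, 6, 2], [1, 3, 5, 9]))  # -> (3, 5.5, 5.5)
```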
The practical complexity of applications in areas such as supply chains requires one to look beyond two-sided markets. Common requirements include: buyers trading with multiple sellers of different or identical items, buyers trading with sellers through transporters and mediators, and sellers trading with multiple buyers. We attempt to address these settings.
We generalize Segal-Halevi et al. (2016)’s strongly budget-balanced double-sided auction setting in three different ways. First, we present a multilateral market where each trade set is composed of any number of agent categories. Our generalization refines the notion of competition in multi-sided auctions by introducing the concepts of external competition and trade reduction. Second, we show an obviously-truthful implementation of our market using multiple ascending prices.
Last but not least, we present an ascending-price mechanism for a multi-sided market with a variety of participants. Each deal in the market may consist of a combination of agents from separate categories, and different such combinations are allowed simultaneously. This flexibility lets multiple intersecting markets be resolved as a single global market. Our mechanism is obviously-truthful, strongly budget-balanced, and individually rational, and it attains almost the optimal gain-from-trade when the market for every allowed combination of categories is sufficiently large. We evaluate the performance of the suggested mechanisms with experiments on real stock market data and synthetically produced data.
This is joint work with Dr. Erel Segal-Halevi and Dvir Gilor.
Bio:
Prof. Rica Gonen is a researcher in the field of computational game theory and computational social choice, affiliated with the Department of Mathematics and Computer Science at the Open University. Before joining the Open University, she completed her PhD at the Hebrew University and a post-doctorate at Bell Labs, and held a research position at Yahoo Labs in the U.S. Prof. Gonen's research focuses on finding algorithmic solutions to questions that lie at the intersection of three knowledge domains: computer science theory, game theory, and microeconomics. Prof. Gonen has conducted groundbreaking research in the areas of combinatorial auctions and markets, rational cryptography, and sponsored search auctions. The mechanisms she designed have been implemented in industry and have led not only to theoretical innovations but also to cost savings of over a hundred million dollars in the supply chain of a major American company. Prof. Gonen has received research grants from the European Union and the Israeli Ministry of Science, and holds over a dozen international patents in the areas of budget-limited mechanisms, advertising markets, and temporal markets.
Title: Evolutionary signatures of genome instability
Abstract:
Accurate replication and careful DNA repair are essential to maintain the integrity and stability of genetic information, ensuring the proper functioning and survival of the organism. Non-precise DNA repair and replication introduce mutations that can have negative consequences, yet they also serve a critical function in propelling evolution forward by introducing valuable genetic variation. The accessibility of complete genome sequences has provided us with the capability to identify specific markers of inaccurate DNA replication and repair that have left lasting imprints in genomes across evolutionary timelines. I will present a bioinformatics pipeline that identifies markers associated with specific mechanisms and explores the implications of our findings.
Bio: Einat Hazkani-Covo is an Associate Professor of Biology at the Open University. Einat received her BSc, MSc and PhD from Tel Aviv University and did her postdoctoral training at the National Evolutionary Synthesis Center and at Duke University. Her research focuses on molecular evolution and evolutionary genetics.
Title: On Parameterized Analysis and the Disjoint Paths Problem
Abstract:
Parameterized Analysis leads both to a deeper understanding of intractability results and to practical solutions for many NP-hard problems. Informally speaking, Parameterized Analysis is a mathematical paradigm to answer the following fundamental question: What makes an NP-hard problem hard? Specifically, how do different parameters (being formal quantifications of structure) of an NP-hard problem relate to its inherent difficulty? Can we exploit these relations algorithmically, and to what extent? Over the past three decades, Parameterized Analysis has grown to be a mature field of outstandingly broad scope with significant impact from both theoretical and practical perspectives on computation.
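As a textbook illustration of the paradigm (the classic example, not a result from the talk): deciding whether a graph has a vertex cover of size k takes time exponential only in the parameter k, via a bounded search tree.

```python
def vertex_cover(edges, k):
    """Bounded search tree for k-Vertex-Cover in time O(2^k * |E|).

    Some endpoint of any remaining edge must be in the cover, so branch on
    the two endpoints and recurse with budget k - 1. Assumes a simple graph.
    Returns a cover of size <= k, or None if none exists.
    """
    if not edges:
        return set()                     # nothing left to cover
    if k == 0:
        return None                      # budget exhausted, edges remain
    u, v = edges[0]
    for w in (u, v):                     # branch: take u, or take v
        cover = vertex_cover([e for e in edges if w not in e], k - 1)
        if cover is not None:
            return cover | {w}
    return None

print(vertex_cover([(1, 2), (2, 3), (3, 4)], 2))   # -> {1, 3}, a cover of size 2
```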
In this talk, I will first give a brief introduction to the field of Parameterized Analysis. Additionally, I will zoom in on a specific result, namely, the first single-exponential-time parameterized algorithm for the Disjoint Paths problem on planar graphs. An efficient algorithm for the Disjoint Paths problem in general, and on "almost planar" graphs in particular, is a critical part of the quest for the establishment of an Efficient Graph Minors Theory. As the Graph Minors Theory is the origin of Parameterized Analysis and is ubiquitous in the design of parameterized algorithms, making this theory efficient is a holy grail of the field.
Bio: Meirav Zehavi is an Associate Professor at the Department of Computer Science at Ben-Gurion University. She is a co-author of the book "Kernelization: Theory of Parameterized Preprocessing," published by Cambridge University Press in 2019. Meirav received her BSc, MSc and PhD in 2010, 2013 and 2015, respectively, from the Technion. She was a postdoctoral researcher at TAU, Berkeley, and the University of Bergen for two years. Her main research interests lie in the field of Parameterized Complexity and Algorithms. She is also interested in topics in Computational Geometry, Computational Social Choice, Approximation Algorithms, and Bioinformatics. She has received an ERC grant, an Alon Fellowship, the Krill Prize, the Toronto Prize, and Best Paper awards.
Title: Optimizing Virtual Camera Trajectories
Abstract: We present two papers that address the problem of computing good virtual camera trajectories. The motivation was the ability to generate virtual camera trajectories that capture the interesting subjects in a dynamic scene from good angles and distances while maintaining plausible camera movement. The problem is NP-hard in general, as we wish to optimize several criteria in a 4-dimensional model. The first paper, presented at EuroCG 2021, describes an attempt to formulate the problem and devise a feasible practical approximation. In the second, recently accepted to CIAC 2023, we constrain the problem to visiting a sequence of segments in 2D while minimizing the number of turns. We present a simple optimal solution where the segment orientation choices are discrete, and conclude with possibilities for future improvements and generalizations.
Short bio: Eli Packer completed his BSc and MSc in computer science at Tel Aviv University, specializing in Computational Geometry under the supervision of Prof. Dan Halperin. He continued to PhD studies in computer science at the State University of New York at Stony Brook, where he was supervised by Prof. J.S.B. Mitchell. Eli has experience in research and development at companies including Intel, IBM, Orbotech and Mobileye, and has taught at Tel Aviv University and several colleges in Israel.
Speaker: Pablo Kogan
Title: Privacy Preserving Solution of DCOPs by Mediation
Abstract: In this study we propose a new paradigm for solving DCOPs, whereby the agents delegate the computational task to a set of external mediators who perform the computations for them in an oblivious manner. That is, the mediators perfectly simulate the operation of the chosen DCOP algorithm, but without getting access to either the problem inputs or its outputs.
Based on joint work with Tamir Tassa and Tal Grinshpoun.
Short bio: Pablo is a Director of Engineering at QEDIT. He obtained his M.Sc. in computer science at the Open University of Israel under the supervision of Prof. Tamir Tassa.
Speaker: Tali Shitrit
Title: Deep Learning for Automated Recognition of Dog Emotional States based on Facial Expressions
Abstract: The motivation for our study lies in the gap between the intensive research on automated recognition of human affective states (such as emotions and pain) and the relatively little research exploring those of non-human animals. Affective states are closely linked to facial expressions in both human and non-human animals. For the latter, objective tools for facial expression analysis, such as animalFACS, are only beginning to be used. However, their use is time-consuming, requires special expertise and training, and remains prone to human bias. Automation of facial analysis offers an exciting alternative, and is already addressed by a huge body of research in the human domain.
For non-human animals, automation has so far been addressed for a few species in the context of pain, while emotional state recognition remains underexplored, especially in canine species, due to the complexity of their facial expressions. We performed a comparative study of various deep learning models for the classification of two experimentally induced emotional states of dogs (frustration and positive anticipation), with the best model reaching above 89% classification accuracy.
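The exact architectures are in the paper; a generic transfer-learning setup for such a two-class problem looks roughly like this (a schematic sketch, not the model that reached 89%):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Pretrained backbone, frozen; a small trainable head for the two classes
# (frustration vs. positive anticipation).
model = resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)    # trainable classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (B, 3, 224, 224) float tensor; labels: (B,) long tensor."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```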
We further investigated explainability aspects of the obtained deep learning models using heatmaps that reflect the regions on which the network focuses, and discuss how these heatmaps may hold the key to novel insights into the network's sensitivity to nuanced pixel patterns that reflect information invisible to the human eye.
Based on joint work with Anna Zamansky and Dror Fried.
Short bio: Tali is a computer vision engineer. She obtained her M.Sc. in computer science at the Open University of Israel, under the supervision of Dr. Dror Fried of the Open University and Prof. Anna Zamansky of the University of Haifa.
Title: The KRW Conjecture
Abstract: Proving super-logarithmic lower bounds on the depth of circuits is one of the main frontiers of circuit complexity. In 1991, Karchmer, Raz and Wigderson observed that we could resolve this question by proving the following conjecture: given two Boolean functions f, g, the depth complexity of their composition is about the sum of their individual depth complexities. While we cannot prove the conjecture yet, there has been some exciting progress toward such a proof, some of it in the last few years. In this talk, I will survey the known results and propose future directions for research on the KRW conjecture.
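In symbols, with D(.) denoting depth complexity, the conjecture reads:

```latex
% f : \{0,1\}^m \to \{0,1\}, \quad g : \{0,1\}^n \to \{0,1\}
(f \circ g)(x_1, \dots, x_m) \;=\; f\bigl(g(x_1), \dots, g(x_m)\bigr),
\qquad x_1, \dots, x_m \in \{0,1\}^n
% KRW conjecture: depth complexity is roughly additive under composition.
\mathsf{D}(f \circ g) \;\approx\; \mathsf{D}(f) + \mathsf{D}(g)
```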
Title: A solution to Ringel’s circle problem
Abstract:
In 1959, Gerhard Ringel posed the following problem: What is the maximal number of colors needed for coloring any collection of circles, no three tangent at a point, such that any two tangent circles get different colors?
The special case where the circles are non-overlapping was shown long ago to be equivalent to the celebrated 4-color theorem. The general case has remained open; it was only known that 4 colors are not sufficient. In this talk we show that no finite number of colors can suffice, by constructing collections of circles whose tangency graphs have an arbitrarily large girth (so in particular, no three are tangent at a point) and an arbitrarily large chromatic number. Our construction, which is one of the first geometric constructions of graphs with a large girth and a large chromatic number, relies on a (multidimensional) version of Gallai’s theorem with polynomial constraints, which may be of independent interest.
Joint work with James Davies, Linda Kleist, Shakhar Smorodinsky, and Bartosz Walczak
Title: Big Data? Wikidata! A Decade of Wikipedia's Most Important Sister Project
Abstract:
Everyone knows Wikipedia, but few know Wikidata, Wikipedia's younger sister project, founded at the end of 2012. Wikidata is a global, multilingual repository of structured, linked data, and as of today it is the largest semantic database in the world, with over 100 million items.
In recent years Wikidata has become a "hot" topic, not only because it is directly tied to the future of Wikipedia, but because it is changing the face of research and opening the door to new applications and uses. The ability to query this database lets us answer questions for which no meaningful answer could be obtained before, and to present information visually, thereby changing the interplay between people and knowledge.
The lecture will open with an introduction to Wikipedia and the Wikimedia projects, and will then focus on Wikidata, including the reasons for founding the project, its data model, interesting examples of uses of the repository from around the world, and a short discussion of the technological implications and challenges. Since few in academia (worldwide, but especially in Israel) are aware of the project's existence, the goal of the lecture is to expose faculty and students to the new possibilities this platform opens up, both as an educational, academic, and research tool, and as a basis for new applications in industry.
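For a taste of that query ability, here is a minimal Python call to Wikidata's public SPARQL endpoint, asking for the ten most populous items classified as cities (wd:Q515 "city", wdt:P31 "instance of", wdt:P1082 "population"):

```python
import requests

query = """
SELECT ?cityLabel ?population WHERE {
  ?city wdt:P31 wd:Q515 ;
        wdt:P1082 ?population .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY DESC(?population)
LIMIT 10
"""
r = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "wikidata-demo/0.1"},   # the endpoint expects a UA
)
for row in r.json()["results"]["bindings"]:
    print(row["cityLabel"]["value"], row["population"]["value"])
```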
About the speaker:
The lecture will be given by Shani Evenstein Sigalov, an educator, lecturer, researcher, and free and open content activist.
In her academic work, Shani focuses on innovation in teaching and on integrating technologies into teaching as a tool for active learning. Since 2013 she has been initiating and teaching academic courses on Wikipedia and Wikidata as learning platforms, and her research deals with integrating Wikipedia and the Wikimedia projects into educational and academic settings as a tool for expanding skills, creating social impact, and reducing knowledge gaps and gender gaps. In her doctoral research at the School of Education she studies semantic networks as a learning platform, focusing on the case study of Wikidata.
She is the editor-in-chief of Project Ben-Yehuda and the chair of the society for the computerization of Hebrew literature, which supports the project; she also serves as vice chair of the Board of Trustees of the Wikimedia Foundation, which operates Wikipedia and its sister projects worldwide.
Title: Guarding Problems and k-Transmitter Watchman Routes
Abstract:
We will introduce guarding problems: the Art Gallery Problem (AGP) and its variants, and the problem for a mobile guard -- the Watchman Route Problem (WRP). In both cases, given a polygon P, we aim to see each point of P. In the AGP we want to place a minimum number of stationary guards; in the WRP we aim to find the shortest tour within P from which each point in the polygon becomes visible.
We focus on the watchman route problem for a k-transmitter watchman: standing at a point p in a polygon P, the watchman can see q ∈ P if pq intersects P's boundary in at most k connected components -- q is k-visible to p. Travelling along the k-transmitter watchman route, either all points in P or a discrete set of points S ⊂ P must be k-visible to the watchman. We aim to minimize the length of the k-transmitter watchman route. We highlight some special properties of k-transmitter watchmen and show that even in simple polygons the k-transmitter watchman route problem for S ⊂ P is NP-complete and cannot be approximated to within a logarithmic factor -- both if a starting point for the route is given and if no such point exists. Moreover, we present a polylogarithmic approximation for the k-transmitter watchman route problem with a given starting point and S ⊂ P, with approximation ratio O(log^2(|S| · n) · log log(|S| · n) · log |S|) (where |P| = n).
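In generic position (no tangencies, no vertices on the segment) each connected component of the intersection is a single crossing point, so k-visibility reduces to counting proper crossings. A small sketch that ignores all degeneracies:

```python
def cross(o, a, b):
    """Signed area of the triangle (o, a, b); the sign gives orientation."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def k_visible(p, q, polygon, k):
    """Is q k-visible from p? Counts proper crossings of segment pq with the
    polygon boundary (given as a vertex list); assumes generic position."""
    crossings = 0
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        if (cross(p, q, a) * cross(p, q, b) < 0 and
                cross(a, b, p) * cross(a, b, q) < 0):
            crossings += 1
    return crossings <= k
```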
Title: Subspace Differential Privacy
Abstract:
Differential privacy allows the publication of data while preserving the privacy of individual data items. Many data applications have certain invariant constraints due to practical needs. Data curators who employ differential privacy need to respect such constraints on the sanitized data product as a primary utility requirement. Invariants challenge the formulation, implementation and interpretation of privacy guarantees. We propose subspace differential privacy to honestly characterize the dependence of the sanitized output on confidential aspects of the data. We also discuss how to handle integer constraints when the output must take integral values. We demonstrate the application of the proposed methods on 2010 Census county-level demonstration data with mandated fixed state population totals.
This is joint work with Prathamesh Dharangutte, Ruobin Gong, and Fang-Yi Yu.
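The mechanics can be seen in a toy sketch (schematic, with an uncalibrated noise scale; the paper sets the scale to match the privacy budget and also handles integer-valued outputs): noise is added only inside the null space of the invariant constraints, so the invariants hold exactly on the output.

```python
import numpy as np

def subspace_laplace(x, A, scale, rng=np.random.default_rng()):
    """Release x + noise while exactly preserving the invariants A @ x.

    The Laplace noise is projected onto the null space of A, so the
    sanitized output satisfies the same linear invariants as x."""
    noise = rng.laplace(scale=scale, size=x.shape)
    P = np.eye(len(x)) - np.linalg.pinv(A) @ A     # projector onto null(A)
    return x + P @ noise

counts = np.array([120.0, 340.0, 55.0, 985.0])     # e.g. county-level counts
A = np.ones((1, 4))                                # invariant: their total
out = subspace_laplace(counts, A, scale=2.0)
assert np.isclose(A @ out, A @ counts)             # total preserved exactly
print(out)
```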
Title: Notions of Symmetry in Formal Methods
Abstract:
I will give a brief introduction to Formal Methods, and present a line of work about symmetry -- an attempt to capture the notion that certain changes in the input should not affect the output of a system.
Specifically, I will discuss "process symmetry", where several players/agents/processes interact with a system, and we swap their identities (e.g., will you surf at the same speed if you connect to a different port on your router?)
No prior knowledge is assumed, posterior knowledge is guaranteed.
Short Bio:
Shaull Almagor is an assistant professor of Computer Science at the Technion. His research focuses on theoretical aspects of automata, quantitative automata, and robotic planning.
Check out Shaull's YouTube channel!
Title: Uncloneable Cryptography
Abstract: One of the peculiarities of quantum mechanics is the no-cloning theorem, which, informally, says that quantum information cannot be copied. It turns out that this "lemon" can be squeezed into lemonade in the context of quantum cryptography. I'll discuss some of the main developments of the past decade in uncloneable cryptography, including: variants of quantum money, which is similar to cash with the added advantage that it is digitally transferable and provably unforgeable (based on some computational hardness assumptions); quantum copy protection for software; and uncloneable decryptors.
Short bio: Dr. Or Sattath is a computer scientist focusing on quantum cryptography, quantum computing, and theoretical computer science. He is an assistant professor at Ben-Gurion University, and was previously a post-doc at the Hebrew University, MIT and UC Berkeley. He completed his PhD at the Hebrew University under the supervision of Prof. Dorit Aharonov and Prof. Julia Kempe. He received the Simons research fellowship during his post-doc and a Clore scholarship for his PhD studies.