I joined Google's Image Search Ranking team in May 2019.
Before Google, I was an Applied Research Scientist on the Data Science and Machine Learning team at Expedia, where I worked on designing scalable optimization algorithms and machine learning models for online auctions and search ranking. I am enthusiastic about applying deep learning and reinforcement learning techniques to problems in related domains.
I did my Ph.D. in the School of Computing at the University of Utah, where I was a graduate member of The Data Group. I worked under the supervision of Suresh Venkatasubramanian. In addition, I was fortunate to collaborate with Jeff M. Phillips, Justin Thaler, and Aditya Bhaskara.
I was a visiting research scholar at the Simons Institute for the Theory of Computing at UC Berkeley for the research program on Theoretical Foundations of Big Data Analysis during 2013-2014. I was also a research intern on the Search and Content Science team at Yahoo Research, working with Kostas Tsioutsiouliklis and Stergios Stergiou during the summer of 2016.
I did my Bachelor's in Computer Science at Sharif University of Technology in Iran, advised by Dr. Ebadollah S. Mahmoodian. For my B.S. thesis, I worked mainly on algorithms for massive graphs and complex networks.
Research Sketch and Projects (Ph.D.)
I successfully defended my Ph.D. dissertation, titled "Sublinear Algorithms for Massive Data Computations: Streaming and Coresets", in May 2017.
My Ph.D. research focused on designing algorithms for problems motivated by massive data analysis, with attention to resources such as time, space, and communication, as well as accuracy. I also studied algorithmic design techniques in modern data models and explored how randomization and approximation can serve as powerful resources for developing simple, elegant, and efficient algorithms with provable theoretical and performance guarantees.
In particular, I focused on research problems in the general area of sublinear algorithms, such as streaming, coresets, and sublinear verification, with a special interest in problems arising from data analysis, including data summarization, clustering, matrix problems, and massive graphs.
Sublinear Algorithms for Massive Data Computations
In recent years, there has been increasing interest in the design and analysis of algorithms for the massive data sets that arise ever more frequently in applications. Managing and analyzing such data sets requires reconsidering the traditional notion of an efficient algorithm: classic algorithmic models no longer capture the costs of modern large-scale applications, and even linear-time algorithms can be too slow and expensive. Hence, there is a desire to develop algorithms whose complexity is not merely polynomial but in fact sublinear in the input size n; these are called sublinear algorithms.
Constructing coresets is one technique for designing sublinear algorithms. A coreset summarizes a large data set with a proxy set of potentially much smaller size that comes with error guarantees for certain classes of queries. Here the assumption is often that we have offline access to the data; however, due to the increasing size of data sets, this assumption may not be valid. This motivates the study of streaming algorithms, where data arrives in a streaming fashion and the goal is to extract only a small amount of information about the input (a “sketch”) for computing an approximate answer to the query. However, many problems (especially on graph data) have been shown to be intractable in the streaming setting, requiring a prohibitive amount of space or number of passes over the data. This motivates a relaxation of the standard streaming model, called streaming interactive proofs, in which a powerful third party (the prover) assists with the computation to reduce the required memory, while correctness is still verified with a sublinear amount of communication.
This research project consists of three main components:
 Coresets and Summarization for High-Dimensional Probabilistic Data
One of the most popular approaches to processing massive data is to first extract a compact representation (or synopsis) of the data and then perform further processing only on the representation itself, obtaining a good approximation to the original input. This approach significantly reduces the cost of processing, communicating, and storing the data. Examples include techniques such as sampling, sketching, and coresets. Coresets offer a general approach to optimization and have had great success, especially in tasks that would otherwise require prohibitive computation time or space. Furthermore, coresets yield algorithms for models such as distributed computing and streaming, where the space used by the algorithm must be significantly smaller than the input size: simply compute and store a coreset (typically much smaller than the input), and then run a more expensive algorithm on the coreset rather than on the original input.
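As a toy illustration of this compute-on-a-summary workflow (not a construction from the work above): a uniform sample with inverse-probability weights already gives an unbiased estimate of sum-type clustering costs, while strong coreset guarantees require more careful, e.g. sensitivity-based, sampling.

```python
import random

def uniform_coreset(points, m, seed=0):
    """Toy coreset: sample m points uniformly and weight each by n/m,
    so weighted sums over the sample estimate sums over all the data."""
    rng = random.Random(seed)
    n = len(points)
    return [(p, n / m) for p in rng.sample(points, m)]

def cost(center, weighted_points):
    # Weighted sum of squared distances to a candidate center (1-D here).
    return sum(w * (p - center) ** 2 for p, w in weighted_points)

rng = random.Random(1)
points = [rng.gauss(0, 1) for _ in range(100_000)]
full_cost = cost(0.0, [(p, 1.0) for p in points])
core = uniform_coreset(points, 500)
core_cost = cost(0.0, core)  # cheap: evaluated on 500 points, not 100,000
```

Any candidate center can now be scored against the 500-point proxy at a tiny fraction of the cost of a pass over the full data.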
 Streaming Algorithms and Sketching for Massive Graphs
Large-scale graphs are now a widely used tool for representing real-world data. Many modern applications, such as search engines and social networks, must support various queries on large-scale graphs efficiently. It is possible to store a massive graph on a large storage device, but random access on such devices is often quite slow, making computation expensive. To overcome the challenges of computing on massive graphs, one important approach is to maintain a succinct representation that preserves certain properties of the graph (i.e., a coreset). Another popular approach is to process such graphs in the data stream model using a limited amount of space. The core task here is to construct a synopsis data structure that is easy to build in a streaming fashion while still yielding good approximations of the graph's properties. Many of these synopses are constructed using sketching (computing a linear projection of the data) and sampling techniques.
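One of the simplest sampling-based synopses mentioned above can be sketched concretely: classic reservoir sampling maintains a uniform random sample of k edges in a single pass using O(k) memory, without knowing the stream length in advance. (This is a textbook building block, not a specific algorithm from the work above.)

```python
import random

def reservoir_sample(stream, k, seed=0):
    """One-pass reservoir sampling: a uniform random sample of k items
    from a stream of unknown length, using O(k) memory."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randrange(i + 1)  # new item survives with prob. k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

# Edges of a large graph arriving one at a time, never stored in full.
edges = ((u, u + 1) for u in range(1_000_000))
sample = reservoir_sample(edges, 100)
```

Sampled edges like these feed into estimators for graph properties (e.g., subgraph counts or connectivity statistics) without ever materializing the full edge set.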
 Streaming Interactive Verification for Massive Data and Graphs
One of the main challenges in streaming computation on massive data is designing algorithms for potentially hard problems whose computation or space requirements are prohibitive in the streaming model. In this case, we can consider outsourcing the storage and processing of the data stream to a more powerful third party, the cloud, but the data owner would like assurance that the desired computation has been performed correctly. In this setting, a resource-limited verifier (the data owner) sees a data stream and tries to solve the problem with the help of a more powerful prover (the cloud) who sees the same stream. The goal is to develop interactive proof protocols that are efficient in both communication and space for verifying computations that are streaming in nature, and to explore how interaction and communication can help attack problems in data analysis, as well as classic and fundamental problems in geometric and combinatorial algorithms and optimization, under streaming inputs.
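A minimal illustration of the verification idea, using a standard polynomial-fingerprinting trick rather than any specific protocol from the papers above: the verifier keeps O(1) words of state while streaming, the prover later sends a claimed frequency vector, and a random evaluation point makes any false claim detectable with high probability.

```python
import random

P = (1 << 61) - 1  # a large prime modulus

def fingerprint(freq, r):
    """Evaluate the polynomial sum_a freq[a] * r^a (mod P).
    Distinct frequency vectors collide only for a tiny fraction of r."""
    return sum(f * pow(r, a, P) for a, f in freq.items()) % P

def verify(stream, claimed_freq, seed=0):
    """Low-space verifier: fold each stream element into the fingerprint,
    then compare against the prover's claimed frequencies."""
    r = random.Random(seed).randrange(1, P)
    fp = 0
    for a in stream:              # one pass, O(1) words of state
        fp = (fp + pow(r, a, P)) % P
    return fp == fingerprint(claimed_freq, r)

stream = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
honest = {1: 2, 2: 1, 3: 2, 4: 1, 5: 3, 6: 1, 9: 1}
cheating = {**honest, 5: 2}  # prover lies about one count
```

Here `verify(stream, honest)` accepts while `verify(stream, cheating)` rejects; real streaming interactive proofs build richer protocols (e.g., sum-check) on top of exactly this kind of algebraic fingerprinting.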
Sublinear Algorithms, Streaming:
- Sublinear Algorithms for Graph Maxcut and Correlation Clustering. (with Aditya Bhaskara and Suresh Venkatasubramanian)
The 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)
- Efficient Streaming Verification Algorithms for Graph Data Streams. (with Amirali Abdullah, Justin Thaler, Suresh Venkatasubramanian, and Chitradeep Roy)
Manuscript (Journal Version)
- Streaming Verification of Graph Properties. (with Amirali Abdullah, Suresh Venkatasubramanian, and Chitradeep Roy)
The 27th International Symposium on Algorithms and Computation (ISAAC 2016)
- Streaming Interactive Verification in Data Analysis. (with Justin Thaler and Suresh Venkatasubramanian)
The 26th International Symposium on Algorithms and Computation (ISAAC 2015)
- Range Counting Coresets for Uncertain Data.
The 29th Annual ACM Symposium on Computational Geometry (SoCG 2013)
ACM XRDS Magazine, Spring 2014
- You're Crossing the Line: Localizing Border Crossings Using Wireless RF Links. (with Peter Hillyard, Neal Patwari, and Suresh Venkatasubramanian)
IEEE Signal Processing and SP Education Workshop (SPW 2015)
- Track Estimation Using Link Line Crossing Information in Wireless Networks. (with Peter Hillyard, Neal Patwari, and Suresh Venkatasubramanian)
Proc. of the 2013 IEEE Global Conference on Signal and Information Processing, December 3-5, 2013, Austin, Texas
Technical Reports (Undergraduate Research)
- A Survey on Search Strategies in Dynamical Complex Networks.
B.S. Thesis (in Persian). Spring 2010
- On the Security of the Reduced Round Salsa20 Stream Cipher from Viewpoints of Differential and Guess-and-Determine Attacks. (with Taraneh Eghlidos)
Technical Report, Research Proceedings, Electronics Research Institute, Sharif University of Technology, Tehran, Iran, 2011
- An Inside Look at the Design Methods of the eSTREAM Candidates. (with Taraneh Eghlidos)
Technical Report, Research Proceedings, Electronics Research Institute, Sharif University of Technology, Tehran, Iran, 2010
- [May 2019] I started working at Google on the Image Search Ranking team.
- [Oct 2017] I joined the Machine Learning and Data Science team at Expedia as an Applied Research Scientist.
- [May 2017] I successfully defended my Ph.D. dissertation, titled "Sublinear Algorithms for Massive Data Computations: Streaming and Coresets".
- [Summer 2016] I did a research internship on the Search and Content Science team at Yahoo! Labs in Silicon Valley, California.
- [January 2016] We presented our recent work on streaming verification of graph problems at the Sublinear Algorithms Workshop at Johns Hopkins University.
- [Fall 2013 - Fall 2014] I was a visiting research scholar at the Simons Institute for the Theory of Computing at UC Berkeley.
- [January 2015] I attended SODA 2015 and wrote a long blog post on the Streaming session of the conference, published on Geomblog in two parts: One and Two.
- [April 2014/2015] I received student travel grants from ACM SIGACT to attend STOC 2014 and STOC 2015.
- [October 2013] I attended the Theoretical Foundations of Big Data Analysis program at the Simons Institute for the Theory of Computing.
- [Summer 2013] I gave my first conference talk at SoCG 2013 and visited PUC and UniRio in the fabulous city of Rio de Janeiro.
- [February 2013] My first Ph.D. paper, "Range Counting Coresets for Uncertain Data", was accepted at SoCG 2013.
- [October 2013] I received a travel grant to attend the Grace Hopper 2013 Celebration of Women in Computing in Minneapolis, MN.
- [October 2012] I received a travel grant to attend the Grace Hopper 2012 Celebration of Women in Computing in Baltimore, MD.
- [June 2012] I received a travel grant to attend the Women in Theory (WIT) Workshop at Princeton University. I wrote some posts on my advisor's blog about the technical and non-technical sessions during the event.