Driven in part by the hardware lottery, artificial intelligence models keep growing in size, and these larger models demand substantial time and energy both to train and to run during deployment. Our research focuses on accelerating AI workloads such as reinforcement learning, transformer networks, recommendation systems, and few-shot learning.
DNA sequencing has become dramatically faster over the past decades, and analyzing the resulting sequences is now a major bottleneck. Our research focuses on accelerating bioinformatics tasks such as read mapping, de novo assembly, and DNA pattern matching, as sketched below.
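As a toy illustration of the read-mapping kernel, the sketch below builds a k-mer index over a reference sequence and uses a read's first k-mer to seed exact matches. The k-mer length and the sequences are illustrative assumptions, not part of any specific pipeline.

```python
from collections import defaultdict

def build_kmer_index(reference, k):
    """Map every length-k substring of the reference to its start positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def map_read(read, reference, index, k):
    """Seed with the read's first k-mer, then verify the full read exactly."""
    hits = []
    for pos in index.get(read[:k], []):
        if reference[pos:pos + len(read)] == read:
            hits.append(pos)
    return hits

reference = "ACGTACGTGACCTGAACGT"   # toy reference, for illustration only
index = build_kmer_index(reference, k=4)
print(map_read("ACGTG", reference, index, k=4))  # -> [4]
```

Real read mappers must also tolerate sequencing errors and genomic variation, which is what makes the verification step so expensive and such an attractive acceleration target.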
In the era of big data, we focus on accelerating search applications over unstructured data such as text, images, and videos.
A major bottleneck for emerging applications is the von Neumann bottleneck, also known as the memory wall: the cost of moving data back and forth between the processor and memory. Our research focuses on creating architectures and circuits that combine logic and memory within the same array. Examples of these in-memory circuits include content-addressable memories (CAMs), crossbar arrays, and configurable memory arrays.
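To make the CAM abstraction concrete, here is a minimal functional model in Python: a lookup key is compared against every stored word, and all matching row indices are returned. In hardware, every row's match line is evaluated in parallel in a single cycle; the sequential loop below only emulates that behavior, and the class name and stored values are illustrative.

```python
class BinaryCAM:
    """Functional model of a binary content-addressable memory."""

    def __init__(self, words):
        self.words = list(words)            # one stored entry per row

    def search(self, key):
        """Return indices of all rows whose stored word equals the key.
        A real CAM performs this comparison across all rows at once."""
        return [i for i, word in enumerate(self.words) if word == key]

cam = BinaryCAM([0b1010, 0b0111, 0b1010, 0b0001])
print(cam.search(0b1010))                   # -> [0, 2]
```

A ternary CAM extends this idea by letting stored bits be "don't care," which is what makes CAMs useful for tasks such as packet classification and approximate search.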
Reinforcement learning deals with how intelligent agents choose their actions in an environment to maximize cumulative reward.
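As a minimal sketch of that agent-environment loop, the tabular Q-learning example below trains an agent on a toy five-state chain where reward is granted only at the goal. The environment, hyperparameters, and reward structure are illustrative assumptions, not a setup from our work.

```python
import random

N_STATES = 5                    # states 0..4 in a one-dimensional chain
GOAL = N_STATES - 1             # reward is granted only at the last state
ACTIONS = (0, 1)                # 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Toy chain environment: reward 1 only when the goal is reached."""
    nxt = min(GOAL, state + 1) if action == 1 else max(0, state - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(300):            # training episodes
    s = 0
    for _ in range(100):        # cap on steps per episode
        # epsilon-greedy action selection, breaking ties randomly
        if random.random() < epsilon or Q[s][0] == Q[s][1]:
            a = random.choice(ACTIONS)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        nxt, r, done = step(s, a)
        # Q-learning update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt
        if done:
            break

print([round(max(q), 2) for q in Q])   # learned value of each state
```

The printed values decay geometrically with distance from the goal (roughly 0.73, 0.81, 0.9, 1.0), reflecting the discount factor gamma.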