Jakub Grudzien Kuba
I am not talented. I don't believe in talent.
I believe in hard work. I believe we're all equal, capable of anything we want in life.
No, I am not talented. I am obsessed.
Conor McGregor
My name is Kuba. I am a 4th-year PhD student in AI at UC Berkeley, advised by Sergey Levine and Pieter Abbeel, and a student researcher at Meta.
My research is an interplay of reinforcement learning and generative modeling.
Specifically, at Berkeley, I work on offline model-based optimization. That is, I develop algorithms that enable the optimization of arbitrary objects entirely with machine learning. Applications of this technology include protein optimization, drug discovery, and chip design. Ultimately, the goal of my efforts is to develop AI systems that can help us cure diseases and speed up our progress in science and engineering. So I think it's worth putting in some effort ;)
At Meta, I work on reinforcement learning algorithms that improve AI agents in scenarios with limited training data. My interest stems from the fact that we are running out of data, and I do not want that to become a problem stopping us from achieving super-intelligence.
I also really like quantitative finance. In the summer of 2024, I interned at Squarepoint Capital as a quantitative researcher and chased alphas. It was fun!
I was born and raised in Poland (the future football world champion) in Eastern Europe. I graduated from Matex, Staszic High School in Warsaw, and did my undergrad at Imperial College London, UK, in Mathematics with Mathematical Computation. There, I worked with Yaodong Yang on multi-agent reinforcement learning and game theory. He was a great mentor, and I strongly recommend becoming familiar with his research. I completed my MSc in Statistics at the University of Oxford, where I worked with Jakob Foerster and had one of the best times of my life. We worked on reinforcement learning theory and meta-learning.