Existential Crisis Turned into New Year's Resolution

In 2024, moving halfway across the world to live and study in a completely new country was surprisingly one of the easiest and most obvious decisions for me. As easy as the life decisions I made in 2022, when I joined a CS lab as an intern; in 2021, when I committed to KAIST and majored in CS; and in 2018, when I chose to write code all day instead of studying for the Korean SAT. Now, even though I’m probably standing at the most successful point in my life so far, with five years of guaranteed funding for my studies, no decision feels easy or obvious anymore. Instead, my PhD has increasingly felt like crossing a river full of dangers, where a single missed step can be deadly. I should admit that some of the “deadly” futures I fear right now are actually things 15-year-old me might have dreamed of, e.g., being a SWE at any respectable company. 15-year-old me, reading the AlphaGo paper when it was fresh, also pondered becoming someone like one of its authors, but that felt like a million years away. Decisions were easy back then because they only led to intermediate states; now they are hard because they sit so close to the end goal.

But I feel this is more than a simple phase change in life, a mid-20s crisis, or a mid-PhD crisis. I think this dread is definitely amplified by the AI productivity explosion. The future feels more unpredictable and depressing because I feel that the expectations and qualifications for a good researcher have changed, and I’m worried about whether I’m keeping up with them. Two years ago, when I was admitted to a PhD program, I firmly believed that being a good engineer myself, shipping things fast, would be a great advantage. But now I think that advantage is marginal. Despite more papers being submitted to NeurIPS than ever, I believe we won’t need that many ML researchers. First, one researcher with good foundations and AI agents in hand can do much more than several teams of researchers. Second, more specific to my research area, we want more life scientists than ML researchers who (barely) understand biology. I might be wrong, but I’m noticing that recent job postings, new hires, and founders of top bio+ML research teams lean more toward biological backgrounds, which makes sense because those are the people who can ask good questions. Training neural networks and writing code have become easier than ever. The cost of solving problems and executing will keep falling as AI improves, but the cost of asking good questions won’t. My guess is that we’ll only need a handful of ML researchers to provide perspectives that are essential to solving problems, and they do not necessarily need to come from the bio+ML field, just as Demis did not come from a protein folding background before being introduced to the problem (yes, I watched the documentary).

If I fell into the first category (the life scientists who ask the good questions), I wouldn’t be as anxious, but the thing is that four years into this field, I still don’t see myself as a biologist. I still see myself as an ML researcher who is intrigued by one of the most interesting data modalities, complicated functions that are hard to approximate, and pipelines that are hard to optimize. I don’t necessarily see this as a flaw, but I’m sure it puts me in tough competition. If my preference is being a problem solver with a focus on ML, I’m definitely competing (I don’t want to use the word “compete,” but I feel like there are only a limited number of positions from which one can make meaningful contributions to a field, so sadly it is a competition) with people who are working on VLAs, LLM reasoning, and the next frontier of generative models. And obviously, the bar is high because those people are doing some crazy things.

While this reflection burst out of anxiety, identifying its cause weirdly made me feel better and more optimistic. Since I know that expectations have changed, I can cater to them. I can adjust while I’m still young and early in my career, and move fast. A few New Year’s resolutions:

  1. Translate the productivity gains of the SW world into my research. Who am I to say that “AI agents are not reliable enough to power research” when Karpathy delegates most of his nanoGPT experiments to AI agents? I recently adopted Claude Code into my workflow, and it has definitely changed me for the better.
  2. Read more, think more, and still try to ask good questions. Somewhat contradictory to what I wrote above, but yes, the distinction between problem solving and asking good questions is not clear-cut, just as the boundary between ML for science and science using ML is not clear-cut. No one said I’m disqualified from asking good questions in biology just because I didn’t take a single biology class in undergrad. I also think I’m lucky enough to be offered many interesting questions by my friends and collaborators, so developing good taste in questions might be one way forward.
  3. Be more intentional about the projects I take on. Working on projects that merely apply some ML idea to some biology problem (quickly enough to get a paper into a conference) won’t take me anywhere, because I neither asked the question nor came up with the solution.

Translating these into more actionable items (after the ICML deadline… 😅):

  1. I will seriously spend time catching up with AI (coding) agents and developing a research workflow around them.
  2. I will always post a summary whenever I share a paper in our lab’s paper recommendation/update Slack channel. I’m borderline spamming that channel, and I feel it’s both my responsibility and a good habit that forces me to read more.
  3. I will study the foundations, mostly math. No concrete plans yet.
  4. Hopefully, submitting to ICML will mark the end of some of my projects and give me the capacity to work on new ones. I have some ambitious-ish ideas—I will talk to Mohammed and put at least 50% of my time into these new project(s).