A New Graduate Engineer's Challenge at an AI Startup

Mark joined JAPAN AI in 2024. After graduating from the University of the Philippines, he worked at a Japanese company as a research engineer developing AI algorithms, and later at a consulting firm providing AI solutions for Japanese companies. Currently, as a research team leader, he is mainly involved in LLM modeling and team management. We asked him about his mindset as an engineer and his thoughts on his team.

ーーPlease give us a brief self-introduction and tell us about your background.

After graduating from the University of the Philippines in 2015, I worked for about four years as a research engineer at a Japanese company, solving various problems using AI algorithms. After that, I worked at a consulting company providing AI solutions for Japanese companies. I decided to change jobs so I could focus more on AI, and joined JAPAN AI in 2024.

ーーWhat aspects of JAPAN AI resonated with you and led you to join the company?

I had been involved in image generation projects before joining the company and had a strong interest in generative AI. In 2024, when I joined JAPAN AI, there were many AI startups to choose from, but when I spoke with the CTO of GENIEE during my interview, I came away with a strong impression that I could do real research on generative AI here. The research environment was very attractive, and I decided to join the company without hesitation. The team's energy was also a significant factor, and the environment made it easy to talk with people.

ーーWere there any particularly memorable tasks or projects you've worked on since joining the company?

When I first joined JAPAN AI, I was involved in developing a speech-to-speech system that enabled real-time conversations with AI. At the time, we were tackling the core challenge of reducing latency to make the experience feel as natural and fluid as possible. Just as we were making progress, OpenAI released their own real-time API, which outperformed our solution in terms of response speed. While that shift redefined the landscape, our work didn’t stop there—many of our clients require customization and domain-specific tuning that the general-purpose OpenAI API cannot provide. That continued need for flexibility keeps our efforts both relevant and rewarding. These days, my focus has shifted more toward large language model (LLM) development, but I still look back on that project as a formative and exciting part of my journey.

ーーPlease tell us about the development work your team is doing and your role within the team.

The mission of our research team is to evolve JAPAN AI into a company that not only builds powerful applications, but also advances in-house model development. To support this vision, one of our core focuses is curating high-quality data and building robust evaluation frameworks—both of which are fundamental for the successful post-training and alignment of large language models (LLMs). These efforts lay the groundwork for our future modeling capabilities and directly support the product teams working on cutting-edge AI solutions.

As a team leader, I’m deeply invested in both the mission and the people driving it. I pay close attention to ensuring that each member's tasks are aligned with our strategic goals. Beyond technical direction, I care about fostering team growth—supporting each member’s development, managing task planning, and shaping a structure that enables sustainable innovation. I’m fortunate to work with an exceptionally talented group of individuals, whose skills and dedication make leading this team both rewarding and inspiring.

ーーWhat are the most technically challenging parts of your work? And how do you tackle them?

One of the most technically challenging aspects of working in this field is keeping up with the rapid pace of LLM research. It's not enough to simply stay updated — we must continually ask ourselves whether the directions we're pursuing remain meaningful and worth the investment. These two questions are deeply connected.

Before diving into detailed implementation, we always take a step back to calmly and logically evaluate whether a particular method or idea is worth pursuing. This often involves in-depth team discussions. Since implementation demands significant time and resources, we’re careful not to jump on trends that lack long-term value. Instead, we prioritize thoughtful, technically grounded decisions that are aligned with both our goals and potential impact.

To stay ahead, we track the latest research trends daily, carefully select the most promising papers, and repeatedly go through cycles of implementation and evaluation. At the same time, we remain mindful of the balance between ambitious research and practical product quality — a balance that naturally creates a productive tension in our day-to-day work.

Our team is still in its early stages, but I have strong confidence that we’re on a path to make a meaningful impact — not only through the products we build, but also in helping to shape the future of the AI ecosystem in Japan.

ーーPlease describe your typical daily workflow.

I usually start my day with the most mentally demanding tasks, when my focus is sharpest. Lately, that means working on our internal LLM evaluation harness and curating datasets for evaluating both our LLMs and agent systems.
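To give a sense of what an evaluation harness does at its simplest, here is a toy sketch. This is purely illustrative and assumes nothing about JAPAN AI's actual internal tooling: a "model" is just any callable from prompt to answer, and we score exact-match accuracy over a small dataset.

```python
# Toy LLM evaluation harness sketch (hypothetical; not actual internal code).
# A "model" is any callable str -> str; we score exact-match accuracy.
from typing import Callable

def evaluate(model: Callable[[str], str], dataset: list[dict]) -> float:
    """Return exact-match accuracy of `model` over `dataset`.

    Each dataset item is {"prompt": ..., "expected": ...}.
    """
    correct = sum(
        1 for item in dataset
        if model(item["prompt"]).strip() == item["expected"].strip()
    )
    return correct / len(dataset) if dataset else 0.0

# Stub model standing in for a real LLM call:
stub = lambda prompt: "Tokyo" if "capital of Japan" in prompt else "unknown"
data = [
    {"prompt": "What is the capital of Japan?", "expected": "Tokyo"},
    {"prompt": "What is the capital of France?", "expected": "Paris"},
]
print(evaluate(stub, data))  # 0.5
```

Real harnesses replace exact match with task-appropriate metrics (model-graded rubrics, pass@k for code, and so on), but the shape — dataset in, per-item scoring, aggregate out — stays the same.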

As part of my daily routine, I keep a close eye on the latest research trends — scanning newly released papers on arXiv, checking relevant GitHub repositories, and following discussions around emerging techniques. From the flood of information, I narrow down the most promising works through careful reading and prioritization. To streamline this process, I use AI tools such as our own deep research agent at JAPAN AI, and I also build internal tools (via our own agent system, which is pretty neat!) that help summarize and surface key insights from papers — many of which are integrated directly into our own product.
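The triage step described above — narrowing a flood of new papers down to a promising few — can be sketched with a simple keyword-scoring heuristic. This is a hypothetical illustration, not the internal agent-based tooling; the paper data and keywords are made up for the example.

```python
# Hypothetical paper-triage sketch: rank papers by how many interest
# keywords appear in their title + abstract, and keep the top matches.

def triage(papers: list[dict], keywords: list[str], top_k: int = 3) -> list[str]:
    """Return titles of the top_k papers with at least one keyword hit."""
    def score(p: dict) -> int:
        text = (p["title"] + " " + p.get("abstract", "")).lower()
        return sum(kw.lower() in text for kw in keywords)
    # sorted() is stable, so equally scored papers keep their input order.
    ranked = sorted(papers, key=score, reverse=True)
    return [p["title"] for p in ranked[:top_k] if score(p) > 0]

papers = [
    {"title": "Scaling Laws for LLM Post-Training", "abstract": "alignment and data curation"},
    {"title": "A Survey of Speech Synthesis", "abstract": "TTS systems"},
    {"title": "Agentic Evaluation Frameworks", "abstract": "LLM agent benchmarks"},
]
print(triage(papers, ["LLM", "alignment", "agent"]))
# ['Scaling Laws for LLM Post-Training', 'Agentic Evaluation Frameworks']
```

In practice the scoring would come from an LLM summarizer rather than substring matching, but the pipeline shape — fetch, score, rank, surface the top few — is the same.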

When needed, I check in with team members to offer support or align on direction. We also hold regular meetings to share updates and stay on the same page. While continuing development work, I’m always learning — steadily deepening my understanding of LLMs in an environment that requires both technical focus and adaptability.