OpenAI's cofounder, Greg Brockman, says engineers need "technical humility" to succeed at the company. Errich Petersen/Getty Images for SXSW
For Greg Brockman, cofounder and president of OpenAI, the single most important quality for an engineer joining his company isn’t raw technical brilliance, a list of patents, or a decade of coding experience. It’s something far less flashy but, in his view, far more critical — technical humility.
Speaking at the AI Engineer World’s Fair in San Francisco on June 4, Brockman told the audience that success at OpenAI depends on a willingness to leave ego at the door. “You’re coming in because you have skills that are important,” he said in a recorded session later shared on AI Engineer’s YouTube channel. “But it’s a totally different environment from something like a traditional web startup.”
Brockman explained that this insight came from observing friction between two different types of talent: engineers and researchers. Engineers, especially those from startup backgrounds, are often accustomed to building within agreed-upon parameters — “We’ve agreed on an interface, I can implement it however I want” — while researchers tend to see every component as part of a highly interconnected system. In AI research, even the smallest bug can silently erode performance, making collaboration a delicate balance of autonomy and collective scrutiny.
In one early OpenAI project, the cultural gap led to gridlock. Engineers and researchers spent so much time debating each line of code that progress slowed to a crawl. Brockman’s workaround was pragmatic: he’d propose five solutions, the researcher would reject four, and they’d move forward with the one that remained. The experience reinforced his belief that engineers must know when to trust their instincts and when to defer. “The most important thing,” he said, “is to come in, really listen, and kind of assume that there’s something that you’re missing until you deeply understand the why. Then, at that point, great, make the change.”
A Culture Built for the Unknown
Brockman’s philosophy echoes a broader cultural approach at OpenAI, where leadership consistently emphasizes adaptability and open-mindedness over rigid expertise. Nick Turley, head of ChatGPT, described the company’s work as fundamentally different from that of most tech companies. “Approaching each scenario from scratch is so important in this space,” Turley said on Lenny Rachitsky’s tech podcast. “There is no analogy for what we’re building. You can’t copy an existing thing.”
This mindset, he noted, is what makes someone effective at OpenAI. The company can’t simply iterate on the blueprints of products from tech giants like Google or Instagram — it must invent from the ground up. That’s why, during hiring, OpenAI actively tests for the ability to ramp up in a completely unfamiliar domain and deliver results without a precedent to lean on.
According to OpenAI’s publicly available interview guide, the company prizes not just technical agility but also “collaboration, effective communication, openness to feedback, and alignment with our mission and values.” This combination ensures that employees can navigate the ambiguity and experimentation inherent in building world-leading AI systems.
Brockman’s Own Path
Brockman’s perspective is rooted in his own unconventional career. A trained software engineer, he dropped out of MIT to join Stripe in 2010, becoming its chief technology officer during the fintech startup’s rapid ascent. In 2015, he left Stripe to cofound OpenAI alongside Sam Altman, Elon Musk, and other prominent tech figures, with the mission of ensuring that artificial general intelligence benefits humanity.
In August 2024, amid a turbulent period of leadership changes and internal restructuring at OpenAI, Brockman took a three-month leave of absence. When he returned in November, he stepped into a renewed technical leadership role, underscoring his continued influence over the company’s engineering direction.
His message to incoming engineers is clear: building AI at the frontier isn’t about defending your code at all costs — it’s about listening, adapting, and learning until you understand the system well enough to make the right changes. In a field where the smallest decision can have massive downstream effects, humility isn’t just a personal virtue; it’s a technical necessity.