We are an applied AI lab building end-to-end software agents. We're the team behind Devin, the first AI software engineer, and Windsurf, an AI-native IDE. These products represent our vision for AI that doesn't just assist engineers, but works alongside them as a genuine teammate.
Our team is small and talent-dense: world-class competitive programmers, former founders, and researchers who joined from the frontier of AI, including Scale AI, Palantir, Cursor, Google DeepMind, and others.
Research moves at the speed of the infrastructure underneath it. Every training run, evaluation loop, and experimental iteration depends on systems that are fast, reliable, and built to scale. This role exists to make sure nothing in the stack becomes the bottleneck that slows down the frontier.
You will own the core systems that researchers depend on daily: distributed training infrastructure, experiment orchestration, data pipelines, and the tooling that turns raw compute into usable research velocity. This is not a support role. You will work directly alongside researchers, understand the science deeply enough to anticipate what they need next, and build systems that hold up under the pressure of training jobs running across thousands of GPUs. We don't distinguish between research and engineering; the best infrastructure engineers here are also the ones who understand why the research works.
Distributed Training Infrastructure: Build and own the systems that run large-scale training jobs reliably across GPU clusters. This includes job launchers, checkpointing and recovery, fault tolerance, and the monitoring that keeps researchers informed and unblocked.
Scaling Agent Rollouts: Own the infrastructure that runs hundreds of thousands of concurrent coding agent rollouts in VM sandboxes, from high-fidelity environment design to the distributed systems that hold up at our largest RL training scales.
Performance Optimization: Profile and improve training throughput end to end. Identify bottlenecks across data loading, communication overhead, memory utilization, and compute efficiency. Implement solutions that meaningfully improve step time and MFU at scale.
Experiment Orchestration and Tooling: Design and maintain the systems researchers use to launch, track, and analyze experiments. Reduce friction in the research loop so that more time is spent on ideas and less on waiting.
Data Pipeline Engineering: Build high-throughput, reliable data pipelines for training and evaluation. Ensure data quality, reproducibility, and efficiency at the scale our training runs demand.
Debugging and Reliability: Diagnose and resolve training failures across GPUs, networking, numerics, and data. Maintain detailed understanding of failure modes and build systems that fail gracefully and recover fast.
Parallelism and Systems Research: Implement and optimize parallelism strategies, including data, tensor, pipeline, and sequence parallelism. Understand the tradeoffs deeply and apply them to get the most out of available hardware.
Scaling Infrastructure Ahead of Research: Anticipate what the research team will need next and build it before it becomes a constraint. The best infrastructure engineers here are proactive, not reactive.
Deep experience building and operating distributed training systems for large models; comfortable owning infrastructure end to end from the cluster level down to the training loop
Strong systems engineering fundamentals: distributed systems, networking, storage, and the ability to reason about performance across the full hardware-software stack
Proficiency in Python and C++; experience with PyTorch or equivalent deep learning frameworks at a systems level, not just API usage
Hands-on experience with GPU performance profiling, memory optimization, and compute efficiency; able to diagnose why a training run is underperforming and fix it
Experience implementing or optimizing parallelism strategies (data, tensor, pipeline, sequence) for large model training
Track record of building tooling and abstractions that meaningfully accelerate research workflows
Strong debugging instincts across complex, distributed systems where failures are non-deterministic and hard to reproduce
Enough ML knowledge to engage substantively with researchers: understand what they are training, why the architecture choices matter, and what the infrastructure needs to support
We care more about demonstrated capability than credentials. A PhD is one signal among many.
Small, highly selective team where research and product move together; prototypes reach real deployment quickly
You'll own and operate infrastructure running across thousands of GPUs; compute is not a constraint and neither is access to the systems you need to do the work well
The environment rewards speed, autonomy, and technical depth with minimal process overhead; this is one of the most competitive and fast-moving problems in AI