Metamorphic is developing new approaches to intelligence by combining machine learning with large-scale experimental neuroscience, informed by the principles that make the brain efficient, flexible, and robust. We are building foundation models trained on rich, continuous neural data — a high-resolution model of the brain at a scale never before possible.
Our founding team spans machine learning, neuroscience, and neurotechnology, with prior work including the MICrONS project, Neuropixels, and the Enigma project, as well as foundational scientific contributions in learning, neural computation, and generative modeling. Our work sits at the frontier of AI research, and we believe the highest-impact discoveries will come from researchers and engineers working as a single, tightly collaborative team.
The name Metamorphic reflects our belief that the next advances in intelligence will come from a change in form, beyond scale — from artificial to natural intelligence.
We are seeking Research Engineers to join our growing AI research team. You will be responsible for maximizing the training and inference performance of Metamorphic's foundation models, from quantization and low-precision training, to MoE routing optimization, to writing custom CUDA/Triton kernels for our novel architecture. This is a high-impact, technically deep role at the frontier of ML research and engineering. You will write and optimize GPU kernels, profile and eliminate performance bottlenecks, tune low-precision training strategies, and work closely with researchers to ensure architectural decisions translate to efficient and scalable implementations. You'll have substantial autonomy to shape foundational technical decisions on a small, high-impact team.
You'll thrive in this role if you:
Have significant software engineering experience and can move quickly without sacrificing rigor
Are able to balance research goals with practical engineering constraints
Are able to bridge theory and practice, translating paper ideas into robust, performant implementations
Get excited about the nitty-gritty engineering details and incremental performance improvements that others gloss over
Are happy to take on tasks outside your job description to support the team
Enjoy pair programming and deeply collaborative work
Are eager to learn more about machine learning research in a novel scientific domain
Are enthusiastic to work at an organization that functions as a single, cohesive team pursuing large-scale AI research
Have ambitious goals for AI progress and are excited to create the best outcomes over the long term
We offer:
The chance to work on one of the most scientifically consequential AI projects being pursued today
A small, world-class team where your contributions directly shape the science and the company
Competitive compensation and benefits, along with visa sponsorship
Strong mentorship and career development
$180,000 - $280,000 USD, based on experience. We additionally offer a competitive equity package and comprehensive benefits, as well as visa sponsorship for international candidates.
Qualifications:
Bachelor's degree or higher in Computer Science, Machine Learning, or a related field
Strong software engineering skills with a proven track record of building complex systems
Strong proficiency in CUDA, Triton, or similar, with demonstrated experience writing and optimizing GPU kernels
Hands-on experience with mixed-precision and low-precision training and a practical understanding of numerical stability tradeoffs
Deep knowledge of transformer architectures at the implementation level
Experience with MoE architectures: routing algorithms, load balancing, and the systems-level challenges of expert dispatch across GPUs
Hands-on experience with GPU profiling tools (Nsight Compute, Nsight Systems, PyTorch Profiler)
Experience integrating, customizing, and extending third-party high-performance libraries (FlashAttention, cuDNN, Triton, Quack, or similar) into production training stacks
Experience with CUTLASS, cuDNN APIs, and NCCL internals
Familiarity with diffusion models or multimodal model architectures
Experience with inference optimization techniques (KV-cache management, speculative decoding, post-training quantization) and serving frameworks (vLLM, TensorRT-LLM)
We encourage you to apply even if you do not believe you meet every single qualification. If you don't see a role that fits, we encourage you to submit a general application and tell us how you'd like to contribute to our mission.