
Machine Learning Engineer (LLM inference)

GMI Cloud

Location

Mountain View, CA

Salary

Not specified

Type

Full-time

Posted

Today

via LinkedIn

Job Description

Machine Learning Engineer (LLM Inference)

About Us

GMI Cloud is a fast-growing AI infrastructure company backed by Headline VC and one of only six cloud providers worldwide to earn NVIDIA's prestigious Reference Platform Cloud Partner designation. We operate 8 of our own GPU clusters across the U.S. and Asia, delivering a full spectrum of services, from GPU compute to AI model inference API solutions. As an NVIDIA Reference Platform Cloud Partner, our infrastructure meets the highest standards for performance, security, and scalability in AI deployments. We empower AI startups and enterprises to "build AI without limits," providing everything they need to prototype, train, and deploy AI models quickly and reliably.

About this role

We are hiring a Machine Learning Engineer, LLM Optimization to build a world-leading inference optimization team and make GMI Cloud the industry benchmark for LLM serving performance.

This role is for engineers who want to work at the frontier of AI systems. You will drive the research, validation, and productionization of the most advanced inference optimization techniques, and turn them into a real competitive advantage across GMI's inference platform.

Our goal is to make GMI the company that leads the industry in how fast we discover, evaluate, combine, and operationalize the best optimization strategies for real customer workloads. That means not only adopting the latest advances, but also defining best practices, developing our own optimization methodologies, and building the internal framework that keeps GMI ahead of the curve.

You will focus on B200-first optimization, with support for H200 evolution, across core domains including quantization, speculative decoding, KV cache and memory management, prefill/decode disaggregation, and system-level inference optimization. You will work closely with platform and infrastructure teams to transform cutting-edge ideas into measurable gains in latency, throughput, cost efficiency, and production scalability.

Key Responsibilities

  • Drive frontier research and engineering in LLM inference optimization, building GMI’s industry-leading capabilities in performance, efficiency, and scalability.
  • Develop next-generation optimization strategies for large-scale LLM serving across model execution, runtime systems, and production inference platforms.
  • Advance state-of-the-art techniques in quantization and precision optimization to improve throughput, latency, memory efficiency, and cost-performance across modern GPU systems.

  • Push the frontier of speculative decoding and related acceleration methods, including both systems- and model-level approaches for faster generation.

  • Lead innovation in KV cache and memory optimization, improving long-context serving efficiency, memory utilization, and multi-tenant performance.

  • Develop advanced architectures for prefill/decode disaggregation and other distributed inference optimization strategies for large-scale production environments.

  • Drive system-level optimization across scheduling, batching, routing, gateway orchestration, adapter serving, and end-to-end inference efficiency.

  • Build scalable optimization frameworks, performance methodologies, and engineering practices that allow GMI to stay ahead of the industry as models, hardware, and serving patterns evolve.
  • Turn cutting-edge optimization ideas into production-ready capabilities that improve real-world customer workloads across latency, throughput, quality, and cost.
  • Collaborate closely with platform, infrastructure, and product teams to make inference optimization a core technical advantage of GMI Cloud.

Required Skills

  • Strong hands-on experience with LLM inference systems and performance optimization.

  • Solid understanding of inference metrics and tradeoffs, including TTFT, ITL, throughput, goodput, tail latency, GPU utilization, memory efficiency, and quality/cost tradeoffs.

  • Experience with one or more modern serving stacks such as SGLang, vLLM, TensorRT-LLM, Triton, or similar systems.

  • Deep familiarity with GPU-based inference, model serving architecture, and production bottlenecks around compute, memory bandwidth, KV-cache behavior, and scheduling.

  • Strong experimentation skills: able to design benchmarks, interpret results, debug regressions, and produce actionable conclusions rather than isolated microbenchmark wins.
  • Comfortable working across research-style validation and production engineering, with a bias toward measurable impact in real customer scenarios.
  • Strong coding and systems skills in Python, with practical experience in profiling, observability, and performance debugging.

  • Clear communication skills and the ability to explain technical tradeoffs to both engineers and cross-functional stakeholders.

Preferred Qualifications

  • 1+ years of hands-on experience in LLM inference optimization, ML systems optimization, or closely related areas.

  • Experience working on optimization for large-scale model serving, such as latency reduction, throughput improvement, memory efficiency, or cost-performance tuning.
  • Familiarity with one or more major areas of inference optimization, including quantization, speculative decoding, KV cache optimization, prefill/decode disaggregation, or system-level serving optimization.

  • Experience with modern LLM serving stacks, GPU inference systems, or production ML infrastructure is a strong plus.
