Senior AI Engineer, GenAI & ML Evaluation Frameworks
Company: Grafana Labs
Location: Campbell, California (Remote)
Posted on: January 7, 2026
Job Description:
Grafana Labs is a remote-first, open-source powerhouse. There are more than 20M users of Grafana, the open source visualization tool, around the globe, monitoring everything from beehives to climate change in the Alps. The instantly recognizable dashboards have been spotted everywhere from a NASA launch and Minecraft HQ to Wimbledon and the Tour de France. Grafana Labs also helps more than 3,000 companies, including Bloomberg, JPMorgan Chase, and eBay, manage their observability strategies with the Grafana LGTM Stack, which can be run fully managed with Grafana Cloud or self-managed with the Grafana Enterprise Stack, both featuring scalable metrics (Grafana Mimir), logs (Grafana Loki), and traces (Grafana Tempo).

We’re scaling fast and staying true to what makes us different: an open-source legacy, a global collaborative culture, and a passion for meaningful work. Our team thrives in an innovation-driven environment where transparency, autonomy, and trust fuel everything we do. You may not meet every requirement, and that’s okay. If this role excites you, we’d love you to raise your hand for what could be a truly career-defining opportunity.

This is a remote opportunity, and we are considering applicants in USA time zones only at this time.

Senior Engineer – GenAI & ML Evaluation Frameworks

The Opportunity:
At Grafana, we build observability
tools that help users understand, respond to, and improve their
systems – regardless of scale, complexity, or tech stack. The
Grafana AI teams play a key role in this mission by helping users
make sense of complex observability data through AI-driven
features. These capabilities reduce toil, lower the level of domain expertise required, and surface meaningful signals from noisy
environments. We are looking for an experienced engineer with
expertise in evaluating Generative AI systems, particularly Large
Language Models (LLMs), to help us build and evolve our internal
evaluation frameworks, and/or integrate existing best-of-breed
tools. This role involves designing and scaling automated
evaluation pipelines, integrating them into CI/CD workflows, and
defining metrics that reflect both product goals and model
behavior. As the team matures, there’s a broad opportunity to
expand or redefine this role based on impact and initiative.
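To give a concrete flavour of this work, here is a minimal sketch, in Python, of the kind of golden-set regression check that could be wired into a CI/CD pipeline. It is illustrative only and not Grafana’s actual tooling: the golden_set.jsonl file, the generate_answer hook, and the 90% pass-rate threshold are all assumptions made for the example.

# Hypothetical sketch: run a golden-set regression evaluation and fail the
# build if the pass rate drops below a threshold. All names are illustrative.
import json
import sys

PASS_RATE_THRESHOLD = 0.90  # assumed quality bar; tune per product goal


def generate_answer(prompt: str) -> str:
    """Placeholder for the system under test (an LLM-backed feature)."""
    raise NotImplementedError("wire this to the model or agent under evaluation")


def meets_expectation(expected: str, actual: str) -> bool:
    """Deliberately simple check; real metrics would be richer (semantic, structural)."""
    return expected.strip().lower() in actual.strip().lower()


def main(golden_path: str) -> int:
    # One JSON object per line: {"prompt": "...", "expected": "..."}
    with open(golden_path) as f:
        cases = [json.loads(line) for line in f if line.strip()]

    passed = 0
    for case in cases:
        try:
            answer = generate_answer(case["prompt"])
            if meets_expectation(case["expected"], answer):
                passed += 1
        except Exception:
            pass  # a crash on a case simply counts as a failure

    pass_rate = passed / max(len(cases), 1)
    print(f"pass rate: {pass_rate:.2%} ({passed}/{len(cases)})")
    return 0 if pass_rate >= PASS_RATE_THRESHOLD else 1


if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "golden_set.jsonl"))

A CI job could run a script like this after every prompt or model change and use the exit code to gate the merge, which is one simple way evaluation results become regression tracking.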
What You’ll Be Doing:
• Design and implement robust evaluation frameworks for GenAI and LLM-based systems, including golden test sets, regression tracking, LLM-as-judge methods, and structured output verification (see the sketch after these lists).
• Develop tooling to enable automated, low-friction evaluation of model outputs, prompts, and agent behaviors.
• Define and refine metrics for both structure and semantics, ensuring alignment with realistic use cases and operational constraints.
• Lead the development of dataset management processes and guide teams across Grafana in best practices for GenAI evaluation.

What Makes You a Great Fit:
• Experience designing and implementing evaluation frameworks for AI/ML systems.
• Familiarity with prompt engineering, structured output evaluation, and context-window management in LLM systems.
• The ability to work with high autonomy, collaborating with teams and translating their goals into clear, testable criteria supported by effective tooling.

Bonus Points For:
• Experience working in environments with rapid iteration and experimental development.
• A pragmatic mindset that values reproducibility, developer experience, and thoughtful trade-offs when scaling GenAI systems.
• A passion for minimizing human toil and building AI systems that actively support engineers.
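As a flavour of two techniques named above, the sketch below pairs structured output verification (is the response valid JSON with the expected fields?) with an LLM-as-judge score against a reference answer. It is a hypothetical illustration rather than Grafana’s framework: the REQUIRED_FIELDS schema, the judge rubric, and the call_llm hook are assumptions for the example.

# Hypothetical sketch: verify that a model response matches an expected JSON
# structure, then ask a judge model to grade semantic quality on a rubric.
# call_llm is a placeholder hook, not a real SDK call.
import json
from typing import Callable

# Assumed response schema for an AI-generated incident summary (illustrative only).
REQUIRED_FIELDS = {"summary": str, "severity": str, "suggested_action": str}

JUDGE_PROMPT = """You are grading an assistant's incident summary.
Question: {question}
Assistant answer: {answer}
Reference answer: {reference}
Reply with a single integer from 1 (unusable) to 5 (excellent)."""


def verify_structure(raw_output: str) -> bool:
    """Structured output verification: valid JSON carrying the expected fields and types."""
    try:
        parsed = json.loads(raw_output)
    except json.JSONDecodeError:
        return False
    return isinstance(parsed, dict) and all(
        isinstance(parsed.get(key), expected_type)
        for key, expected_type in REQUIRED_FIELDS.items()
    )


def judge_score(question: str, answer: str, reference: str,
                call_llm: Callable[[str], str]) -> int:
    """LLM-as-judge: a (typically stronger) model grades the answer against a reference."""
    reply = call_llm(JUDGE_PROMPT.format(question=question, answer=answer,
                                         reference=reference))
    try:
        return max(1, min(5, int(reply.strip())))  # clamp to the 1-5 rubric
    except ValueError:
        return 1  # an unparsable judgment is treated as the lowest score


if __name__ == "__main__":
    sample = ('{"summary": "Loki ingester OOM", "severity": "high", '
              '"suggested_action": "raise memory limits"}')
    print("structure ok:", verify_structure(sample))
    # judge_score(question, answer, reference, call_llm=<your client's completion call>)

In practice, checks like these would run over a versioned golden dataset, with scores tracked over time so regressions surface in CI rather than in front of users.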
Compensation & Rewards:
In the United States, the base compensation range for this role is USD 154,445 - USD 185,334. Actual compensation may vary based on level, experience, and skillset as assessed in the interview process. Benefits include equity, bonus (if applicable), and other benefits listed here. All of our roles include Restricted Stock Units (RSUs), giving every team member ownership in Grafana Labs’ success. We believe in shared outcomes: RSUs help us stay aligned and invested as we scale globally.

*Compensation ranges are country specific. If you are applying for this role from a different location than listed above, your recruiter will discuss your specific market’s defined pay range and benefits at the beginning of the process.

Why You’ll Thrive at Grafana Labs:
• 100% Remote, Global Culture – As a remote-only company, we bring together talent from around the world, united by a culture of collaboration and shared purpose.
• Scaling Organization – Tackle meaningful work in a high-growth, ever-evolving environment.
• Transparent Communication – Expect open decision-making and regular company-wide updates.
• Innovation-Driven – Autonomy and support to ship great work and try new things.
• Open Source Roots – Built on community-driven values that shape how we work.
• Empowered Teams – High trust, low ego culture that values outcomes over optics.
• Career Growth Pathways – Defined opportunities to grow and develop your career.
• Approachable Leadership – Transparent execs who are involved, visible, and human.
• Passionate People – Join a team of smart, supportive folks who care deeply about what they do.
• In-Person Onboarding – We want you to thrive from day 1 with your fellow new ‘Grafanistas’ and learn all about what we do and how we do it.
• Balance is Key – We operate a global annual leave policy of 30 days; 3 days of your entitlement are reserved for Grafana Shutdown Days to allow the team to really disconnect.

*We will comply with local legislation where applicable.