Security Architect – AI / ML / LLM / Agentic Systems

The Linux Foundation

Location

Remote, US

Salary

$170,000 - $185,000 per year


Posted

Today

via Indeed

Job Description

###### Company Description

The Linux Foundation is a 501(c)(6) non-profit that provides a neutral, trusted hub for developers and organizations to code, manage, and scale open technology projects and ecosystems. The Open Source Security Foundation (OpenSSF) is a cross-industry organization at the Linux Foundation that brings together the industry’s most important open source security initiatives and the individuals and companies that support them. The OpenSSF is committed to collaboration and working both upstream and with existing communities to advance open source security for all.

###### Job Description

We are looking for a Security Architect specializing in AI, Machine Learning, Large Language Models (LLMs), and Agentic Systems to help secure the next generation of open source software and AI-enabled systems. This role will provide technical leadership and architectural guidance across the OpenSSF ecosystem, focusing on emerging security risks introduced by AI/ML pipelines, foundation and open-weight models, autonomous agents, and their integration into modern software supply chains. Additionally, the Security Architect will provide leadership for the OSS-CRS (Open Source Software Cyber Reasoning System) project: a standardized infrastructure for building, running, and evaluating Cyber Reasoning Systems (CRS) that perform automated vulnerability discovery and remediation in open source software.

The ideal candidate operates at the intersection of deep technical expertise, open source collaboration, and global risk management, helping translate cutting-edge AI security challenges into practical, adoptable guidance for maintainers, enterprises, and regulators.

###### Responsibilities

  • Develop reference architectures, threat models, and security design patterns for AI/ML/LLM and agentic systems used in open-source software
  • Identify and analyze security risks across the full AI lifecycle, including: Data sourcing, curation, and training; Model development, fine-tuning, and evaluation; Deployment, inference, and runtime operation; Agent-to-agent and agent-to-tool interactions
  • Provide OSS-CRS project leadership, including: Collaborating with maintainers on the project roadmap; Testing OSS-CRS tools and providing end-user guidance on their operation; Participating in the Coordinated Vulnerability Disclosure process for issues discovered with OSS-CRS tools; Encouraging project adoption among security engineers, software developers, Open Source Program Offices, and open source project maintainers
  • Serve as a technical advisor to OpenSSF working groups and initiatives, collaborating with maintainers and contributors to ensure guidance is practical, scalable, and aligned with open-source realities
  • Support the development of community-driven guidance, tooling recommendations, and best practices
  • Standards, Policy, and Ecosystem Alignment
  • Align OpenSSF AI security guidance with relevant frameworks and standards, including: NIST Secure Software Development Framework (SSDF); NIST AI Risk Management Framework (AI RMF); ISO/IEC 27000-series and emerging AI standards; EU AI Act
  • Help translate regulatory and policy expectations (e.g., product security and software assurance requirements) into actionable technical controls
  • Engage with industry, academia, and government stakeholders to promote consistent, interoperable AI security approaches
  • Thought Leadership & Education: Author whitepapers, technical reports, and architectural guidance for public release; Present OpenSSF AI security work at conferences, workshops, and community events; Help educate developers, maintainers, and security teams on secure-by-design AI practices
  • Travel: up to 20%

###### Qualifications

Prerequisites:

  • 10+ years of experience in software, cloud, or systems security with a focus on architecture and technical leadership
  • Hands-on experience securing AI/ML systems
  • Deep understanding of: ML pipelines and infrastructure; LLMs, prompt engineering, and retrieval-augmented generation (RAG); Agentic and autonomous system architectures
  • Experience securing cloud-native and distributed systems commonly used for AI workloads

###### Desirable Skills and Background:

  • Security Architecture & Risk Management
  • Threat modeling complex systems
  • Designing security architectures and controls
  • Applying secure SDLC practices in modern environments
  • Strong understanding of identity, access control, secrets management, and isolation in AI contexts
  • Open Source & Collaboration: Demonstrated experience working with open-source communities or foundations; Ability to balance security rigor with developer usability and community sustainability; Experience producing vendor-neutral, ecosystem-wide guidance
  • Communication & Influence: Excellent written and verbal communication skills; Ability to explain complex technical risks to non-technical audiences; Comfortable leading through influence in a collaborative, multi-stakeholder environment

###### Additional Information

Salary: $170,000 - $185,000 USD

All your information will be kept confidential according to EEO guidelines.
