Job Requirements
University of Maryland, MD
Clearance: Secret
Polygraph: not specified
Career Level: not specified
$60,000 - $80,000
Job Description
Organization's Summary Statement:
The Applied Research Laboratory for Intelligence & Security (ARLIS) at the University of Maryland is a University-Affiliated Research Center (UARC) dedicated to advancing research, innovation, and technology transition to improve decision making for U.S. national security. ARLIS combines deep scientific expertise with operational insight to address challenges in intelligence analysis, cybersecurity, artificial intelligence / machine learning, quantum science, and human-machine teaming. Researchers, scientists, engineers, and analysts at ARLIS collaborate with government agencies, industry partners, and academic institutions to deliver actionable insights and transformative solutions through research and development. Employees at ARLIS work on projects of critical importance, contribute directly to the nation’s security, and are supported by a culture that values integrity, collaboration, and professional growth.
The Applied Research Laboratory for Intelligence and Security (ARLIS) at the University of Maryland is seeking a Postdoctoral Associate in AI Security to conduct cutting-edge research at the intersection of machine learning, cybersecurity, and national security.
This position focuses on advancing the science and practice of securing advanced AI systems, such as large language models (LLMs), reasoning systems, and agentic architectures, against sophisticated adversaries. The role operates within a mission-driven R&D environment supporting government and Intelligence Community (IC) partners, where the threat model assumes highly capable actors with deep technical access to deployed systems. Opportunities include basic and open research, publication in top-tier venues, and transition of capabilities into operational use. The successful candidate will contribute to frontier research spanning adversarial machine learning, secure AI deployment, and other approaches to security and safety, such as mechanistic interpretability.
Key Responsibilities
• Conduct original research in AI security, including adversarial machine learning, model robustness, and secure AI system design.
• Develop and evaluate novel attack and defense techniques for modern AI systems, including:
  • Mechanistic and white-box analysis of model behavior and safety mechanisms
  • Multi-turn and adaptive adversarial interactions with AI systems
  • Security of reasoning models and agent-based architectures
• Design and implement experimental frameworks for evaluating AI system vulnerabilities across deployment scenarios (e.g., open-weight, API-based, and hybrid systems).
• Apply interpretability techniques (e.g., circuit analysis, feature attribution, sparse autoencoders) to understand internal model behavior and failure modes.
• Contribute to the development of benchmarks, evaluation methodologies, and datasets for AI security research.
• Collaborate with interdisciplinary teams including machine learning researchers, systems engineers, and national security domain experts.
• Translate research findings into actionable insights for government sponsors, including technical reports and briefings.
• Publish research in leading conferences and journals (e.g., NeurIPS, ICML, ICLR, IEEE S&P, CCS), consistent with program objectives.
Must be able to obtain a U.S. security clearance. If selected, you must meet the requirements for access to classified information and will be subject to a government security clearance investigation that includes criminal and credit history checks, as well as verification of U.S. citizenship, birth, education, employment, and military history.
Final offer is contingent upon the candidate’s ability to successfully obtain the necessary interim Secret security clearance, as determined by the U.S. Government, prior to commencing employment.
Research Areas of Interest
Candidates may contribute to one or more of the following focus areas:
Adversarial AI & Red Teaming
• Adaptive, multi-turn attacks and reasoning-based adversarial strategies
• Evaluation of model robustness under realistic threat models
Secure AI Systems & Deployment
• Security of agentic systems, tool use, and multi-model architectures
• Supply chain and fine-tuning risks in open-weight models
AI Evaluation & Benchmarking
• Development of security-focused benchmarks and evaluation pipelines
• Measurement of robustness, safety degradation, and attack transferability
Mechanistic AI Security
• Circuit-level analysis of safety and capability mechanisms
• Feature geometry, representation learning, and interpretability-driven security
Work Environment & Impact
• Engage in high-impact research directly supporting national security missions.
• Work alongside leading experts in AI, cybersecurity, and intelligence applications.
• Access to advanced computing infrastructure and unique government-relevant problem sets.
• Opportunity to shape emerging standards and practices for securing advanced AI systems.
• Balance of publishable academic research and mission-driven applied work.
Why This Role:
AI systems are rapidly becoming foundational to national security operations. At the same time, their attack surface is evolving toward more sophisticated threat models, including adversaries with deep technical access and the ability to exploit internal model behavior.
This position offers a unique opportunity to define how next-generation AI systems are secured, combining foundational research with real-world mission impact.
Physical Demands:
Sedentary work performed in a normal office environment; exerts up to 10 pounds of force occasionally and/or a negligible amount of force frequently or constantly to lift, carry, push, pull, or otherwise move objects, including the human body. Must be able to attend meetings both on and off campus and to spend long hours in front of a computer screen.
Minimum Qualifications
• Ph.D. in Computer Science, Machine Learning, Cybersecurity, or a related technical field.
• Demonstrated research experience in one or more of the following areas:
  • Machine learning (deep learning, LLMs, reinforcement learning)
  • Adversarial machine learning or AI safety/security
  • Systems security, applied cryptography, or cyber operations
• Strong programming skills in Python and experience with ML frameworks (e.g., PyTorch, TensorFlow).
• Experience designing and executing empirical research, including experimentation and evaluation.
• Ability to work in a collaborative, interdisciplinary research environment.
• Ability to obtain and maintain a U.S. security clearance.
Preferences:
• Familiarity with white-box threat models and evaluation of open-weight AI systems.
• Experience with MLOps or large-scale training infrastructure, including distributed training, GPU clusters, or ML experimentation platforms.
• Knowledge of AI system deployment architectures, including RAG systems, multi-agent systems, or tool-augmented models.
• Experience with adversarial evaluation frameworks, red-teaming methodologies, or benchmark development.
• Experience with mechanistic interpretability and/or alternative approaches to understanding model internals (e.g., activation analysis, circuit-level reasoning, representation learning).
• Background in national security applications, including work with DoD, IC, or federally funded research programs.
• Record of publications in top-tier conferences or journals.
Licenses/Certifications: N/A