Posted today
Secret
Senior Level Career (10+ yrs experience)
$130,000 - $150,000
IT - Security
Colorado Springs, CO (On-Site/Office) • Huntsville, AL (On-Site/Office)
Description of Duties:
The Senior Elastic Stack Data Integration Engineer supports the Missile Defense Agency (MDA) on the Integrated Research and Development for Enterprise Solutions (IRES) contract. The candidate will:
• Serve as the primary technical authority for designing, building, and maintaining data ingestion pipelines supporting Elastic SIEM.
• Focus on creating scalable, resilient Logstash architectures.
• Develop advanced pipeline logic.
• Normalize, enrich, and transform security telemetry.
• Ensure reliable delivery of high-fidelity data to Elasticsearch.
Key Responsibilities:
• Architect, build, and maintain Logstash pipelines to ingest and transform logs from diverse systems, including network devices, servers, cloud services, and security platforms.
• Implement parsing, grok patterns, JSON transformations, conditional routing, enrichment logic, and ECS mapping (see the pipeline sketch following this list).
• Optimize pipeline performance, resiliency, and scalability (e.g., persistent queues, pipeline workers, memory tuning, load balancing); a settings sketch also follows this list.
• Ensure all ingested data aligns to ECS (Elastic Common Schema) or internal schema requirements.
• Implement data enrichment workflows (GeoIP, threat intel lookups, metadata injection).
• Validate data completeness, integrity, and fidelity across ingestion flows.
• Maintain and optimize Logstash clusters, including version management, scaling, tuning, and high-availability configurations.
• Manage integrations with Beats, Elastic Agent, Kafka, syslog endpoints, and custom data collectors.
• Monitor ingestion throughput, latency, and error rates; implement proactive alerting and troubleshooting processes.
• Create and maintain technical documentation, including pipeline diagrams, data flow maps, runbooks, and schema references.
• Establish enterprise standards for parsing, enrichment, normalization, and ingestion patterns.
• Support internal and external audits by documenting data handling flows and pipeline logic.
• Work closely with SIEM integration engineers to align pipelines with customer environments and logging requirements.
• Partner with detection engineering teams to ensure data supports analytic coverage and rule development.
• Collaborate with infrastructure and platform operations for deployment, scaling, and reliability engineering.
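To make the parsing, enrichment, ECS-mapping, and routing duties above concrete, the sketch below shows a minimal Logstash pipeline of the kind this role would own. The port, grok pattern, hostnames, and index name are hypothetical placeholders, not the actual IRES configuration.

```
# Minimal illustrative pipeline -- all endpoints and names are hypothetical.
input {
  # Receive raw syslog (port is a placeholder).
  syslog {
    port => 5514
  }
}

filter {
  # Parse the raw message into ECS-style nested fields.
  grok {
    match => {
      "message" => "%{SYSLOGTIMESTAMP:timestamp} %{IP:[source][ip]} %{GREEDYDATA:[event][original]}"
    }
  }

  # Normalize the event timestamp from the parsed syslog time.
  date {
    match        => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    remove_field => [ "timestamp" ]
  }

  # GeoIP enrichment written to the ECS source.geo object.
  geoip {
    source => "[source][ip]"
    target => "[source][geo]"
  }

  # Tag the ECS dataset used for conditional routing below.
  mutate {
    add_field => { "[event][dataset]" => "network.firewall" }
  }
}

output {
  # Conditional routing: firewall telemetry goes to its own index.
  if [event][dataset] == "network.firewall" {
    elasticsearch {
      hosts => ["https://es01.example.local:9200"]   # placeholder host; TLS/auth omitted
      index => "logs-network.firewall-default"
    }
  }
}
```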
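The resiliency and throughput tuning mentioned in the responsibilities is largely driven by logstash.yml. A sketch of the relevant settings, with illustrative values rather than tuned recommendations:

```
# logstash.yml excerpt -- values are illustrative, not prescriptive.
queue.type: persisted        # durable on-disk queue so in-flight events survive restarts
queue.max_bytes: 4gb         # disk ceiling for the persistent queue
pipeline.workers: 8          # parallel filter/output workers; size to available cores
pipeline.batch.size: 250     # events per worker batch; trades throughput vs. latency
pipeline.batch.delay: 50     # ms to wait for a batch to fill before flushing
```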
The successful candidate will:
• Have a deep command of Logstash architecture, patterns, and performance optimization.
• Have mastery of parsing, enrichment, normalization, and ECS alignment.
• Have a strong understanding of network protocols, logging patterns, and telemetry generation from enterprise systems.
• Have advanced troubleshooting skills across data ingestion, pipeline logic, and Elastic Stack processing layers.
• Be able to design scalable, high-availability (HA) ingestion workflows with clear operational boundaries.
• Be able to conduct data modeling, schema design, and transformation mapping.
• Be effective at interfacing with multiple teams, gathering requirements, and aligning pipeline designs with SIEM analytics needs.
• Be focused on reliability, maintainability, and observability across all pipeline components.
• Have strong attention to detail and a disciplined approach to documentation, versioning, and configuration management.
• Be able to work independently, drive pipeline architecture decisions, and mentor junior engineers.
• Have strong documentation, workflow diagramming, and communication skills.
Basic Requirements:
• Must have 10 or more years of general (full-time) work experience.
○ May be reduced with completion of advanced education.
• Must have 5 or more years of experience in log ingestion, data engineering, or SIEM pipeline development.
• Must have 2 or more years of experience in a management or leadership role, mentoring and guiding other team members.
• Must have a strong background in Elastic Stack components (Elasticsearch, Kibana, Beats, Elastic Agent).
• Must have experience with data ingestion, processing, and enrichment techniques.
• Must have hands-on experience ingesting, processing, and normalizing diverse log types (Windows events, syslog, firewall logs, cloud telemetry, security tooling).
• Must be proficient with Linux administration, system-level debugging, and CLI-based operations.
• Must have a DoD 8570.01-M IAT Level II certification with Continuing Education (CE): CCNA-Security, CySA+, GICSP, GSEC, Security+ CE, CND, or SSCP.
• Must have an active DoD Secret Security Clearance.
• Must be able to obtain a DoD Top Secret Security Clearance.
Desired Requirements:
• Be an Elastic Certified Engineer or have relevant Elastic Stack certifications.
• Have strong experience integrating Kafka, Redis, or other message bus technologies into ingestion workflows (a Kafka input sketch follows this list).
• Be proficient with scripting in Python, Bash, or PowerShell for automation and data validation.
• Have experience designing geo-distributed or multi-cluster ingestion architectures.
• Have knowledge of threat intelligence ingestion, correlation data enrichment, and advanced ECS mapping.
• Have experience with CI/CD pipelines, GitOps workflows, or Infrastructure-as-Code (Terraform, Ansible).
• Be familiar with data quality assurance frameworks and pipeline testing methodologies.
• Have knowledge of cloud-native logging architectures (AWS Firehose, Azure Event Hub, GCP Logging).
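As a sketch of the message-bus integration called out above, a hypothetical Logstash Kafka consumer input; broker addresses, topic, and group id are invented for illustration:

```
input {
  kafka {
    bootstrap_servers => "kafka01:9092,kafka02:9092"   # placeholder brokers
    topics            => ["security-telemetry"]        # placeholder topic
    group_id          => "logstash-siem-ingest"        # placeholder consumer group
    codec             => "json"                        # decode JSON payloads into event fields
    consumer_threads  => 4                             # parallel consumers per instance
  }
}
```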