Job Requirements
McLean, VA
Intel Agency (NSA, CIA, FBI, etc.) Full Scope Polygraph
Mid Level Career (5+ yrs experience)
Salary not specified
Job Description
We are seeking a Software Developer to design, build, and maintain scalable data-driven applications, analytics systems, and production pipelines. This role combines software engineering, data modeling, and applied analytics to deliver solutions that improve operational efficiency and enable data-informed decision-making.
The ideal candidate is strong in Python development, experienced in data systems and machine learning workflows, and comfortable translating business requirements into scalable technical solutions.
Duties & Responsibilities:
Participate in data modeling and analytics engineering teams to design scalable data-driven solutions
Translate business needs into technical requirements, system designs, and implementation plans
Design, build, and optimize data pipelines and analytics systems supporting production workloads
Develop and maintain software solutions using Python (object-oriented programming)
Build and maintain CI/CD pipelines using Git/GitHub and Jenkins
Develop and optimize data extraction, transformation, parsing, and storage workflows
Design and implement logical data architectures supporting SQL and NoSQL systems
Clean, structure, and prepare datasets for use in machine learning and statistical analysis
Apply mathematical, statistical, and analytical methods to solve complex data problems
Develop and test models using structured experimentation and the scientific method (hypothesis → test → evaluate)
Tie model and system testing to quantitative performance metrics and KPIs
Develop custom database scripts, queries, and data interfaces
Build and maintain APIs and data services supporting internal and external applications
Implement automated testing, configuration management, and deployment workflows
Support cloud-based systems and deployments using AWS or Azure environments
Work with distributed computing systems and optimize performance using parallel processing techniques
Develop automation tools, including CLI-based utilities for non-technical users
Use issue tracking and collaboration tools such as Jira and Confluence
Requirements:
Strong experience programming in Python (object-oriented development required)
Experience using data science and analytics libraries (Pandas, Scikit-learn)
Experience designing and building data pipelines incorporating SQL and NoSQL systems
Experience participating in data modeling and analytics project teams
Experience translating business requirements into technical solutions
Experience building and maintaining CI/CD pipelines using Git/GitHub and Jenkins
Experience performing data extraction, transformation, and large-scale data processing
Experience applying statistical, mathematical, and analytical techniques to real-world problems
Experience working with distributed systems or distributed computation environments
Experience with automated testing and software deployment workflows
Experience working with APIs and system integration
Experience using Jira and version control systems in collaborative environments
Experience with cloud platforms such as AWS or Azure
Experience building reproducible, tested, and maintainable software systems
Preferred Qualifications:
Experience applying machine learning techniques (supervised and unsupervised learning)
Experience working in Agile development environments
Experience developing new analytic or data-driven systems from the ground up
Experience handling or working with encrypted data or secure systems
Experience writing or reviewing technical documentation and program artifacts
Experience developing geospatial or geographic analysis models
Experience building systems involving performance measurement and operational metrics
Experience automating data preparation and feature engineering workflows
Experience creating scalable analytics systems for production environments
What we offer:
Flexible time off
Full medical coverage
401(k) with company match
Referral bonuses
Performance bonuses
Life insurance and disability coverage
Tuition and training reimbursement