Job Requirements
Annapolis Junction, MD
Top Secret/SCI Full Scope Polygraph
Senior Level Career (10+ yrs experience)
Job Description
Candidates must already possess an active Top Secret/SCI w/ Full Scope Polygraph to be considered.
Summary:
• Develop software to extract meaning and value from structured and unstructured data.
• Leverage Machine Learning/Artificial Intelligence and statistical methods.
• Analyze and develop requirements for data characterization and ingestion.
• Work with database systems, Java, Python, and NiFi.
• Utilize scripting languages like Python and Scala for analytic development (illustrated in the brief sketch after this summary).
• Employ big data processing frameworks such as Pig, MapReduce, Apache Spark, and Hadoop.
• Apply software engineering, software development, and systems engineering skills.
• Perform software testing and work with data flows and network metadata processing.
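As a rough illustration of the analytic development this summary describes, here is a minimal Python sketch that applies a statistical library (pandas) against structured CSV data; the file path and column names (source_addr, byte_count) are hypothetical, not taken from the posting:

import pandas as pd

# Load a structured (CSV) sample of network metadata; file and columns are assumed for illustration.
records = pd.read_csv("metadata_sample.csv")

# Basic statistical characterization of every column in the dataset.
print(records.describe(include="all"))

# A simple question-focused view: the ten entities moving the most bytes.
top_talkers = records.groupby("source_addr")["byte_count"].sum().nlargest(10)
print(top_talkers)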
Qualifications & Compensation:
• Degree: Technical bachelor's degree or equivalent experience
• Years of experience: 20+ years
• Total Compensation: $325k+ yearly
Job Description:
• Devise strategies to extract meaning and value from structured and unstructured data.
• Leverage statistical methods and/or machine learning to discover patterns and behaviors of entities.
• Use query and visualization tools to present question-focused datasets in a story-like manner.
• Work with various domains of metadata to develop new methodologies and techniques for automated data characterization.
• Analyze and develop requirements to support the characterization and ingestion of new and existing data types.
• Collaborate with teams to understand direct mission needs and requirements.
• Knowledgeable about data enrichment/conversion methods, familiar with data ontologies/schemas, and well versed in structured data types (XML, CSV, JSON).
• Possess database experience.
• Java/Python experience, analytic development experience, and a working knowledge of NiFi are highly desired.
• Minimum of 6 recent years of development experience.
• Recommend new technologies and processes for complex software projects.
• Serve as the technical lead of multiple software development teams.
• Ensure quality control of all developed and modified software.
• Delegate programming and testing responsibilities to one or more teams and monitor their performance.
• Develop simple data queries for existing or proposed databases or data repositories.
• Analytic development experience using scripting languages such as Python and Scala to apply statistical libraries against data.
• Skilled with big data processing frameworks such as Pig, MapReduce, and Spark to scale algorithms over large volumes of data (illustrated in the brief sketch after this list).
• Experience employing a combination of analysis, computer science, mathematics, and software engineering skills to devise strategies for extracting meaning and value from large datasets.
• Experience with predictive analytics, machine learning, and data mining.
• Skilled with data flows, Pig scripting, Hadoop MapReduce, and various analytic tools.
• Knowledge of data indexing and analytic development.
• Experience working with cloud service providers and data stewards.
• Demonstrated experience in network metadata processing, manipulation, and analysis.
• Active TS/SCI clearance with polygraph is required.
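To give a concrete sense of scaling such an analytic with a big data framework, here is a minimal PySpark sketch; the input path and column names (entity_id, peer_id) are assumptions for illustration only:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("metadata-characterization").getOrCreate()

# Ingest a structured data type (JSON here); CSV or XML would follow a similar pattern.
events = spark.read.json("events.json")  # hypothetical input path

# Characterize entities with simple aggregate statistics, distributed across the cluster.
summary = (
    events.groupBy("entity_id")
          .agg(F.count("*").alias("event_count"),
               F.countDistinct("peer_id").alias("distinct_peers"))
          .orderBy(F.desc("event_count"))
)

summary.show(20)
spark.stop()

The resulting frame could feed Spark MLlib or a visualization tool; the sketch only illustrates the Python/Spark workflow the bullets above reference.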
About SYSTOLIC:
SYSTOLIC is dedicated to giving our employees the best possible company experience so that they can focus on providing outstanding support to their customers' missions. Our company is founded on integrity, enthusiasm, and a relentless commitment to supporting the Intelligence Community. You can learn more about us and submit an application to be considered for our current and future openings at https://systolic.com.
To learn about our compensation ranges, visit our Pay Transparency page at: https://systolic.com/pay-transparency