Nov 12
Public Trust
Early Career (2+ yrs experience)
Unspecified
IT - Software
Remote/Hybrid • Gaithersburg, MD (Off-Site/Hybrid)
Databricks Data Engineer
Location: Remote (quarterly travel to Gaithersburg, MD required)
Active Public Trust clearance required.
Job Description
We are seeking a Databricks Data Engineer to develop and support data pipelines and analytics environments within Azure cloud infrastructure. The engineer will translate business requirements into data solutions supporting an enterprise-scale Microsoft Azure-based data analytics platform. This includes maintaining ETL operations, developing new pipelines, ensuring data integrity, and enabling AI-driven analytics.
Responsibilities
• Design, build, and optimize scalable data solutions using Databricks and Medallion Architecture (a minimal example of this pattern is sketched after this list).
• Manage ingestion routines for multi-terabyte datasets across multiple Databricks workspaces.
• Integrate structured and unstructured data to enable high-quality business insights.
• Implement data management and governance strategies ensuring security and compliance.
• Support user requests, platform stability, and Spark performance tuning.
• Collaborate across teams to integrate with Azure Functions, Data Factory, Log Analytics, and related Azure services.
• Manage infrastructure using Infrastructure-as-Code (IaC) principles.
• Apply best practices for data security, governance, and federal compliance.
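To give a concrete flavor of the Medallion-style work described above, here is a minimal sketch of a bronze-to-silver refinement step in Databricks using PySpark and Delta Lake. It is an illustration only, not part of the actual platform: the table and column names (bronze.raw_events, silver.events, event_id, event_ts) are assumed for the example.

# Minimal bronze-to-silver Medallion step (PySpark on Databricks).
# All table and column names below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in a Databricks notebook

# Read the raw, append-only bronze table.
bronze_df = spark.read.table("bronze.raw_events")

# Clean and conform: deduplicate, enforce types, drop bad records, add processing metadata.
silver_df = (
    bronze_df
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
    .withColumn("processed_at", F.current_timestamp())
)

# Persist the curated silver table in Delta format.
(silver_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("silver.events"))

In practice a step like this would typically run as a scheduled Databricks job or Delta Live Tables pipeline, with the gold layer built on top of the silver table for reporting and AI-driven analytics.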
Qualifications
• BS in Computer Science or related field with 3+ years of experience, or MS with 2+ years.
• 3+ years of experience building ingestion flows for structured and unstructured data in the cloud.
• Databricks Data Engineer certification and 2+ years of Databricks platform maintenance and Spark development.
• Strong skills in Python, Spark, and R (.NET is a plus).
• Experience with Azure services (Data Factory, Storage, Functions, Log Analytics).
• Familiarity with data governance, metadata management, and enterprise data catalogs.
• Experience with Agile methodology, CI/CD automation, and cloud-based development.
• U.S. Citizenship and active Public Trust clearance required.
Skills:
• Databricks
• Azure Data Factory
• Python
• Spark
• R
• Data Ingestion
• ETL Development
• Medallion Architecture
• Infrastructure as Code (IaC)
• Data Governance
• CI/CD
• Cloud Data Engineering
• Metadata Management
• Azure Functions
• Log Analytics