Today
Secret
Early Career (2+ yrs experience)
$150,000 - $170,000
IT - Data Science
Washington, DC (On-Site/Office)•Alexandria, VA (On-Site/Office)
Titania Solutions Group is seeking a skilled Level III Data Engineer with strong communication skills. The successful candidate will play a critical role in designing, building, and maintaining robust data pipelines, ensuring near real-time data flow, and enabling insightful business intelligence solutions. This is a unique opportunity to work on cutting-edge data architecture, streaming technologies, and advanced analytics solutions that drive key business decisions. This role will work remotely during Eastern Standard Time core hours.
Responsibilities:
Design, develop, document, and maintain scalable data pipelines using Confluent for real-time data streaming.
Build and optimize data models and dashboards in Qlik to support business intelligence and reporting needs.
Integrate data from various sources, ensuring consistency, accuracy, and timeliness.
Collaborate with cross-functional teams, including Data Scientists, Analysts, and Software Engineers, to define data requirements and deliver impactful data solutions.
Implement best practices for data ingestion, processing, and storage, focusing on performance and reliability.
Monitor and troubleshoot data pipelines, ensuring data quality and operational excellence.
Stay current with emerging technologies and industry trends in data engineering, streaming platforms, and BI tools.
Requirements:
Bachelor’s degree in Computer Science, Data Science, or a related field, or equivalent years of experience.
3+ years of professional experience as a Data Engineer, with a focus on streaming data solutions and BI tools.
Clearance Level: DoD Secret, or an equivalent DHS or other-agency Public Trust.
Strong understanding of data warehousing concepts, ETL/ELT processes, and real-time data architectures.
Excellent problem-solving skills, attention to detail, and ability to work independently or as part of a team.
Effective communication skills with the ability to explain technical concepts to non-technical stakeholders.
Proven experience with Confluent Kafka (Confluent Platform, Kafka Streams, ksqlDB, Schema Registry, etc.).
Proficiency in Qlik Sense (data modeling, scripting, and dashboard creation).
Strong programming skills in Python, Java, or Scala for data processing.
Expertise in SQL and experience with relational and NoSQL databases.
Familiarity with cloud platforms (e.g., AWS or Azure) and related services for data engineering.
Experience with data orchestration tools such as Apache Airflow is a plus.
Certifications in Confluent Kafka and/or Qlik are a plus.
Experience with containerization and orchestration (e.g., Docker, Kubernetes) is a plus.
Knowledge of data governance, security, and compliance best practices is a plus.
Experience with CI/CD pipelines and DevOps practices in data engineering is a plus.
group id: 10492954