Big Data Engineer/Developer
Location: Remote
Required Clearance: Public Trust
Since 1999, ITEC has delivered mission-critical support to the DoD and Intelligence Community. Now part of ManpowerGroup Public Sector (MGPS), we continue that work with expanded capabilities. Employees hired through this process will join MGPS and receive a comprehensive benefits package and competitive pay.
U.S. Citizenship Mandatory: Due to our U.S. federal government contract, candidates for this position must be U.S. Citizens and will be subject to a background investigation.
Job Description:
As a GCP Data Pipeline Developer & BigQuery Data Engineer, you will design, develop, and maintain data pipelines on Google Cloud Platform. You will work with BigQuery to analyze and optimize large datasets and implement ETL processes that ensure data quality and integrity. Proficiency in foundational GCP services, BigQuery, data engineering principles, SQL, and PySpark is essential for success in this position; experience with ETL tools such as Qlik Data Integration or FiveTran is a valuable asset.
You will work on complex data projects that demand a deep understanding of cloud-based data processing and analysis. Strong SQL skills are necessary for querying and manipulating data within BigQuery, while PySpark experience enables advanced data transformations and analytics. Your expertise in GCP and BigQuery will be crucial in developing efficient data pipelines that meet the organization's data processing needs and drive data-driven decision-making.
In this role you will help shape the organization's data infrastructure and analytics capabilities. Your background in data engineering and cloud-based data processing will be instrumental to the success of data projects and initiatives. By applying your skills in GCP, BigQuery, SQL, and PySpark, you will design and implement scalable data solutions that support the organization's data-driven goals, and your familiarity with ETL tools will help streamline data integration and enhance overall data quality.
Job Responsibilities:
- Develop and maintain GCP data pipelines
- Design and optimize BigQuery data structures
- Utilize SQL and PySpark for data processing
- Collaborate with team members on data engineering projects
- Implement ETL processes using tools like GCP Dataflow, Qlik Data Integration, or FiveTran