Senior Data Operations (DataOps) Engineer P-134

Remote
Full Time
Experienced

SMASH: Who We Are

We believe in long-lasting relationships with our talent. We invest time in getting to know our candidates and understanding what they are looking for in their next professional step.

We aim to find the perfect match. As agents, we pair our talent with our US clients based not only on technical skills but also on cultural fit. Our core competency is finding the right talent fast.

This position is remote within the United States. You must have U.S. citizenship or a valid U.S. work permit to apply for this role.

Role summary
You will lead the evolution of DataOps practices at a global scale, designing highly automated, resilient, and scalable data platforms. This role focuses on building self-service, microservices-based data infrastructure on GCP, enabling rapid deployment, strong data reliability, and continuous delivery through advanced automation and observability.

Responsibilities

  • Lead the design and implementation of enterprise-scale DataOps platforms and automation frameworks.

  • Architect and evolve GCP-native data platforms supporting high-throughput batch and real-time workloads.

  • Design and implement microservices-based data architectures using containerization technologies.

  • Build and maintain CI/CD pipelines for data workflows, including automated testing and deployment.

  • Develop Infrastructure as Code (IaC) solutions to standardize and automate platform provisioning.

  • Implement robust data orchestration, monitoring, and observability capabilities.

  • Establish and enforce data quality frameworks to ensure reliability and trust in data products (a minimal illustration follows this list).

  • Support real-time data platforms operating at extreme scale.

  • Partner with platform squads to deliver self-service data infrastructure products.

  • Drive best practices for automation, resiliency, scalability, and operational excellence.

  • Influence technical direction, mentor senior engineers, and lead through ambiguity.
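
For illustration only (this is not a deliverable of the role), a data quality framework of the kind described above often starts from a small, reusable check runner. The Python sketch below is hypothetical; every name in it (CheckResult, run_checks, the sample rows, and the predicates) is invented for this example.

    # Hypothetical sketch: a minimal, reusable data-quality check runner.
    # All names here are invented for illustration.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class CheckResult:
        name: str
        passed: bool
        failures: int

    def run_checks(rows: list[dict],
                   checks: dict[str, Callable[[dict], bool]]) -> list[CheckResult]:
        """Apply each named predicate to every row; a check passes only if all rows pass."""
        results = []
        for name, predicate in checks.items():
            failures = sum(1 for row in rows if not predicate(row))
            results.append(CheckResult(name, failures == 0, failures))
        return results

    if __name__ == "__main__":
        sample = [
            {"order_id": "A1", "amount": 19.99},
            {"order_id": "A2", "amount": -5.00},  # fails the non-negative check
        ]
        checks = {
            "order_id_present": lambda r: bool(r.get("order_id")),
            "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
        }
        for result in run_checks(sample, checks):
            print(result)

A production framework would layer orchestration, alerting, and historical reporting on top of checks like these.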

Requirements – Must-haves

  • 8+ years of progressive experience in DataOps, Data Engineering, or Platform Engineering roles.

  • Strong expertise in data warehousing, data lakes, and distributed processing technologies (Spark, Hadoop, Kafka).

  • Advanced proficiency in SQL and Python; working knowledge of Java or Scala.

  • Deep experience with Google Cloud Platform (GCP) data and infrastructure services.

  • Expert understanding of microservices architecture and containerization (Docker, Kubernetes).

  • Proven hands-on experience with Infrastructure as Code tools (Terraform preferred).

  • Strong background in CI/CD methodologies applied to data pipelines (see the sketch after this list).

  • Experience designing and implementing data automation frameworks.

  • Advanced knowledge of data orchestration, monitoring, and observability tooling.

  • Ability to architect highly scalable, resilient, and fault-tolerant data systems.

  • Strong problem-solving skills and ability to operate independently in ambiguous environments.
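
As a purely hypothetical sketch of CI/CD applied to data pipelines, a pipeline's transformation logic can be covered by ordinary unit tests that run on every commit before deployment. The normalize_country function and its fixtures below are invented for this example; only Python's standard unittest module is used.

    # Hypothetical sketch: unit-testing a data transformation so a CI
    # pipeline can gate deployment on it. normalize_country and its
    # fixtures are invented for this illustration.
    import unittest

    def normalize_country(code: str) -> str:
        """Map free-form country inputs onto two-letter codes."""
        aliases = {"usa": "US", "u.s.": "US", "united states": "US", "uk": "GB"}
        cleaned = code.strip().lower()
        return aliases.get(cleaned, cleaned.upper())

    class TestNormalizeCountry(unittest.TestCase):
        def test_known_aliases(self):
            self.assertEqual(normalize_country(" USA "), "US")
            self.assertEqual(normalize_country("uk"), "GB")

        def test_passthrough(self):
            self.assertEqual(normalize_country("de"), "DE")

    if __name__ == "__main__":
        unittest.main()

In a CI setup, a failing test like this blocks the pipeline change from shipping, which is the deployment-gating behavior the requirement refers to.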

Nice-to-haves

  • Experience with real-time streaming systems at very large scale.

  • Exposure to AWS or Azure data platforms (in addition to GCP).

  • Experience with data quality tooling and governance frameworks.

  • Background building internal developer platforms or self-service infrastructure.

  • Experience influencing technical strategy across multiple teams or domains.
