Senior Data Engineer P-133

Remote
Full Time
Experienced

SMASH: who we are

We believe in long-lasting relationships with our talent. We invest time in getting to know our candidates and understanding what they are looking for in their next professional step.

We aim to find the perfect match. As agents, we pair our talent with our US clients based not only on technical skills but also on cultural fit. Our core competency is finding the right talent fast.

This position is remote within the United States. You must have U.S. citizenship or a valid U.S. work permit to apply for this role.

Role summary
You will design and deliver scalable, GCP-native data solutions that power machine learning and analytics initiatives. This role focuses on building high-quality, domain-driven data products and decentralized data infrastructure that enable rapid iteration, measurable outcomes, and long-term value creation.

Responsibilities

  • Design and implement a scalable, GCP-native data strategy aligned with machine learning and analytics initiatives.

  • Build, operate, and evolve reusable data products that deliver compounding business value.

  • Architect and govern squad-owned data storage strategies spanning BigQuery, AlloyDB, operational data stores (ODS), and transactional systems.

  • Develop high-performance data transformations and analytical workflows using Python and SQL.

  • Lead ingestion and streaming strategies using Pub/Sub, Datastream (CDC), and Cloud Dataflow (Apache Beam).

  • Orchestrate data workflows using Cloud Composer (Airflow) and manage transformations with Dataform.

  • Modernize legacy data assets and migrate procedural logic out of operational databases into analytical platforms.

  • Apply Dataplex capabilities to enforce data governance, quality, lineage, and discoverability.

  • Collaborate closely with engineering, product, and data science teams in an iterative, squad-based environment.

  • Drive technical decision-making, resolve ambiguity, and influence data architecture direction.

  • Ensure data solutions are secure, scalable, observable, and aligned with best practices.

Requirements – Must-haves

  • 8+ years of professional experience in data engineering or a related discipline.

  • Expert-level proficiency in Python and SQL for scalable data transformation and analysis.

  • Deep expertise with Google Cloud Platform data services, especially BigQuery.

  • Hands-on experience with AlloyDB (PostgreSQL) and Cloud SQL (PostgreSQL).

  • Strong understanding of domain-driven data design and data product thinking.

  • Proven experience architecting ingestion pipelines using Pub/Sub and Datastream (CDC).

  • Expertise with Dataform, Cloud Composer (Airflow), and Cloud Dataflow (Apache Beam).

  • Experience modernizing legacy data systems and optimizing complex SQL/procedural logic.

  • Ability to work independently and lead initiatives with minimal guidance.

  • Strong critical thinking, problem-solving, and decision-making skills.

Nice-to-haves

  • Experience applying Dataplex for data governance and quality management.

  • Exposure to procedural SQL dialects such as T-SQL and PL/pgSQL.

  • Experience supporting machine learning or advanced analytics workloads.

  • Background working in decentralized, squad-based or product-oriented data teams.

  • Experience influencing technical direction across multiple teams or domains.
