
Ssr Data Engineer

nocnoc


Data Science
Montevideo Department, Uruguay
Posted on Feb 8, 2026

The company

nocnoc is the leading e-commerce facilitator for global brands and retailers looking to increase their sales in Latin America.

We enable sellers around the world to easily access 15+ marketplaces through a single platform, offering their products to over 500 million online customers. We are committed to connecting Latin America with the world through e-commerce.

The opportunity

We are looking for a Ssr Data Engineer to join our Data team and play a key role in the evolution, scalability, and reliability of nocnoc’s data platform.

This role is designed for someone with strong hands-on experience in modern data engineering, who enjoys building robust data pipelines, defining best practices, and taking ownership of critical data products. You will work closely with Analytics, Product, and Business teams, contributing not only through execution but also through technical leadership and decision-making.

As a Ssr Data Engineer, you will help shape our Lakehouse architecture, ensure data quality and governance, and mentor other engineers while delivering high-impact solutions for the business.

Key Responsibilities:

  • Design, build, and maintain data pipelines across Bronze, Silver, and Gold layers, following established standards and best practices.
  • Implement batch and near-real-time data pipelines using Python, SQL, Spark, and Delta Lake.
  • Develop and maintain workflows orchestrated with Apache Airflow, with a focus on reliability and maintainability.
  • Implement data ingestion patterns such as incremental loads, CDC, and full refreshes under guidance.
  • Collaborate in the implementation and improvement of the Lakehouse architecture on AWS (S3, Glue, Athena, and legacy Redshift).
  • Support data quality, consistency, and freshness by implementing validations and monitoring (e.g., Great Expectations or similar).
  • Contribute to data documentation, catalogs, and basic lineage as part of governance initiatives.
  • Assist in optimizing storage and query performance for analytical workloads.
  • Work closely with Analytics, Product, and Business teams to understand requirements and implement data solutions.
  • Participate in code reviews, technical discussions, and continuous improvement initiatives.
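To give a flavor of the ingestion patterns mentioned above, the sketch below contrasts a full refresh with a CDC-style incremental merge (upsert plus delete) using plain Python dictionaries. This is only an illustration of the pattern, not nocnoc's implementation: in a real pipeline this logic would typically be a Delta Lake MERGE running on Spark and orchestrated by Airflow, and all field names here (`id`, `sku`, `qty`, `_deleted`) are hypothetical.

```python
# Illustrative only: full refresh vs. CDC-style incremental merge.
# Plain Python dicts stand in for Delta tables; keys and fields are made up.

def full_refresh(source_rows):
    """Rebuild the target from scratch: simple, but rereads everything."""
    return {row["id"]: row for row in source_rows}

def incremental_merge(target, changed_rows):
    """Apply only new or changed rows: upsert by key, honor delete flags."""
    for row in changed_rows:
        if row.get("_deleted"):
            target.pop(row["id"], None)           # CDC delete event
        else:
            target[row["id"]] = {                 # insert or update
                k: v for k, v in row.items() if k != "_deleted"
            }
    return target

# Initial load, then one incremental batch with an update, an insert,
# and a delete event.
target = full_refresh([
    {"id": 1, "sku": "A", "qty": 10},
    {"id": 2, "sku": "B", "qty": 5},
])
target = incremental_merge(target, [
    {"id": 2, "sku": "B", "qty": 7},   # update existing row
    {"id": 3, "sku": "C", "qty": 1},   # insert new row
    {"id": 1, "_deleted": True},       # delete by key
])
```

The trade-off the role will deal with daily: full refreshes are easy to reason about but expensive at scale, while incremental merges touch far less data at the cost of needing reliable change capture and key-based merge logic.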

What experience will help you in this role?

  • 2-3+ years of experience in Data Engineering or similar roles.
  • Good proficiency in Python and SQL for data processing.
  • Hands-on experience with Spark (PySpark), preferably in production or large-scale environments.
  • Understanding of Lakehouse concepts and formats such as Delta Lake.
  • Experience working with cloud-based data platforms, ideally AWS (S3, Glue, Athena, Redshift).
  • Experience building or maintaining Airflow DAGs.
  • Knowledge of data modeling for analytics (fact/dimension, incremental tables, snapshots).
  • Exposure to data quality checks and monitoring practices.
  • Familiarity with data governance, cataloging, and lineage concepts.
  • Ability to work independently on well-defined tasks and collaborate effectively within a team.

What do we value the most?

  • Ownership and a results-driven mindset
  • Comfort with dynamic change and high-speed growth
  • Being a team player
  • Empathy and a willingness to learn and grow

Thank you for reading, and we hope to meet you soon!