Data Engineer
The Role:
We are seeking a highly motivated and experienced Data Engineer to join our Data Engineering team. In this role, you will be at the forefront of designing and developing scalable, robust data architectures and solutions using the latest technologies from Google Cloud Platform (GCP) and AWS. You will collaborate closely with cross-functional teams to understand their data needs, focusing on building, optimizing, and scaling data platform solutions that drive insights for marketing strategies, personalization efforts, and operational efficiencies. As a senior member of the team, you will work closely with data scientists, machine learning engineers, and data analysts, playing a critical role in shaping the company's data architecture.
Responsibilities:
- Design, develop, and maintain scalable, high-performance data infrastructure to support the collection, storage, and processing of large datasets in real time and batch modes.
- Build reliable, reusable services and APIs that allow teams to interact with the data platform for ingestion, transformation, and querying of data.
- Develop internal tools and frameworks to automate and streamline data engineering processes.
- Collaborate with senior management, product management, and other engineers in the development of data products.
- Develop tools to monitor, debug, and analyze data pipelines.
- Design and implement data schemas and models that can scale.
- Mentor team members to build the company's overall expertise.
- Work to make our company an innovator in the space by bringing passion and new ideas to work every day.
Requirements:
- At least 5 years of proven experience as a Data Engineer developing platform-level capabilities for data-driven midsize to large corporations.
- Strong object-oriented programming skills in languages such as Python, Java, or Scala, with experience building large-scale, fault-tolerant systems.
- Experience with cloud platforms (GCP, AWS, Azure), with a strong preference for GCP.
- Experience with BigQuery or similar (Redshift, Snowflake, or other MPP databases).
- Experience building data pipelines and ETL processes.
- Experience with the command line and version control software (git).
- Excellent communication and collaboration skills.
- Ability to work independently and quickly become productive after joining.
Preferred Requirements:
- Knowledge of distributed data processing frameworks such as Apache Kafka, Flink, Spark, or similar.
- Experience with dbt (data build tool) and Looker.
- Experience with machine learning pipelines or MLOps.
Category: Technology
Locations: Brazil, Remote - LatAm
Remote status: Fully Remote
Employment type: Full-time