Data Engineer

Full-time · Engineering · Data · Remote from 🇦🇷 🇨🇴 🇩🇴 🇲🇽 🇵🇦 🇵🇪
Open to candidates in: South Africa, Argentina, Egypt, Colombia, Dominican Republic, Mexico, Panama, Yemen, Peru
Hire Hangar
🏭 Staffing and Recruiting
📍 New York City, New York, US
👤 11-50

Join Hire Hangar and work with fast-growing global companies while building a long-term, remote career.

Job Title: Data Engineer – Tech Industry

Location: Remote

Time Zone: US Time Zones (EST–PST)

Role Overview

We are seeking a skilled and detail-oriented Data Engineer to design, build, and maintain scalable data infrastructure for a fast-growing tech company. This is a full-time role requiring deep technical expertise in data pipelines, warehousing, and architecture. The ideal candidate is analytical, collaborative, and comfortable working across large datasets while partnering with data science, product, and engineering teams to deliver reliable, high-quality data solutions.

Key Responsibilities

  • Design, build, and maintain robust, scalable data pipelines and ETL/ELT workflows

  • Develop and manage data warehousing solutions to support business intelligence and analytics needs

  • Ensure data quality, integrity, and availability across all data systems and sources

  • Collaborate with data scientists, analysts, and product teams to understand and fulfill data requirements

  • Optimize query performance and database architecture for speed and efficiency

  • Monitor, troubleshoot, and resolve data pipeline failures and incidents

  • Implement and enforce data governance, security, and compliance best practices

  • Document data models, processes, and infrastructure to support team knowledge sharing

Required Qualifications

  • 4+ years of experience as a Data Engineer or in a similar data infrastructure role within a tech company (non-negotiable)

  • Strong proficiency in SQL and at least one programming language such as Python or Scala

  • Hands-on experience building and managing ETL/ELT pipelines using tools such as Apache Airflow, dbt, or similar

  • Experience with cloud data platforms such as AWS, GCP, or Azure, and data warehouses such as Snowflake, BigQuery, or Redshift

  • Solid understanding of data modeling, schema design, and database optimization

  • Strong problem-solving skills with the ability to manage complex, high-volume data environments

  • Prior remote work experience is required, including fluency with remote collaboration tools and platforms (such as Slack, Zoom, Google Workspace, Asana, or similar); experience working with US- or UK-based companies is preferred. Applications without remote work experience will not be considered.

Preferred Qualifications

  • Experience with real-time or streaming data pipelines using tools such as Kafka or Spark

  • Familiarity with data lakehouse architectures (e.g., Delta Lake, Apache Iceberg)

  • Exposure to machine learning workflows and supporting ML data infrastructure

  • Experience working in an Agile or DevOps environment with CI/CD practices applied to data workflows

Tools & Technology

  • SQL, plus Python or Scala

  • Apache Airflow, dbt, or equivalent pipeline orchestration tools

  • Snowflake, BigQuery, or Redshift

  • AWS, GCP, or Azure

  • Kafka or Spark (advantageous)

  • Google Workspace

  • Slack, Zoom, Jira, and other remote collaboration tools

Please Note

It is crucial that you complete the application form in full. As part of the application process, you will be required to record a video, which serves as the first step of the interview process. If your application is successful, you will receive an email confirming the next steps. If you do not record a video, we will not be able to consider you for any open roles.

We connect top talent with vetted employers, competitive pay, and real growth opportunities.
