A global tech-services and talent-solutions company with expertise in industrial AI, data science, software, and more.
August 2
🏡 Remote – New York
Airflow
Amazon Redshift
Apache
AWS
Azure
Cloud
Docker
ETL
Google Cloud Platform
Informatica
Java
Kafka
Open Source
PySpark
Python
Scala
Spark
SQL
Vault
Go
• A Data Engineer's primary role is designing, building, and maintaining scalable data pipelines and infrastructure to support data-intensive applications and analytics solutions.
• They collaborate closely with data scientists, analysts, and software engineers to ensure efficient data processing, storage, and retrieval for business insights and decision-making.
• They apply expertise in data modelling, ETL (Extract, Transform, Load) processes, and big data technologies to develop robust and reliable data solutions.
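The ETL responsibility above can be sketched as a minimal pipeline in plain Python. This is an illustrative assumption, not part of the posting: the field names, sample data, and in-memory SQLite "warehouse" are hypothetical, and a production pipeline for this role would more likely use the listed stack (PySpark, Airflow, Redshift):

```python
# Minimal ETL sketch: extract raw CSV, transform (clean types, drop bad
# rows), and load into a warehouse table. All names here are hypothetical.
import csv
import io
import sqlite3

def extract(raw_csv: str) -> list:
    """Extract: parse raw CSV text into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list) -> list:
    """Transform: cast amounts to float and drop rows missing an amount."""
    return [(r["order_id"], float(r["amount"])) for r in rows if r["amount"]]

def load(records: list, conn: sqlite3.Connection) -> int:
    """Load: write cleaned records into the target table; return row count."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", records)
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

# Sample run: one row (A2) has a missing amount and is dropped in transform.
raw = "order_id,amount\nA1,10.5\nA2,\nA3,7.25\n"
conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(raw)), conn)
print(loaded)  # → 2
```

The same extract/transform/load shape carries over directly to PySpark DataFrames and Airflow task graphs; only the execution engine and orchestration change.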
• Proven experience as a Data Engineer, or in a similar role, with hands-on experience building and optimizing data pipelines and infrastructure.
• Proven experience with Big Data and the tools used to process it.
• Strong problem-solving and analytical skills, with the ability to diagnose and resolve complex data-related issues.
• Solid understanding of data engineering principles and practices.
• Excellent communication and collaboration skills to work effectively in cross-functional teams and explain technical concepts to non-technical stakeholders.
• Ability to adapt to new technologies, tools, and methodologies in a dynamic, fast-paced environment.
• Ability to write clean, scalable, robust code in Python or a similar programming language. A background in software engineering is a plus.