Data Engineer (SQL/Python: 6028)
A Global IT Services company is looking for a Data Engineer (SQL/Python) to join their team!
- Ensure the reliability, availability, and performance of data platform infrastructure, systems, and data storage
- Ensure successful and complete execution of all data processing batches
- Participate in incident response and post-mortems to identify root causes and prevent future incidents
- Diagnose and resolve data-related issues, working closely with the team's technical lead and other stakeholders to ensure minimal downtime
- Coordinate collaboration with various stakeholders
- Manage data platform infrastructure and system upgrades or migrations to comply with company policies
- Implement automation
- Maintain comprehensive documentation (platform system architecture, data dictionary & infrastructure, configurations, processes, & so on)
- Develop data visualization reports
- Understand user requirements, analyze data, and discover business insights
- Design and build data models and data pipelines
Our client is a global company with a strong presence across several business divisions. It has enjoyed sustained growth both domestically and overseas, including in the US and Europe. It prides itself on a diverse, international environment and a strong focus on equal opportunities, which translates into plenty of career opportunities. Given the variety of businesses, it works with a wide range of technologies, so it is a great place to expand your skills!
Also, employees are welcome to choose their own setup (Windows/Mac, etc.); whatever makes them comfortable! Free meals are also provided at the company cafeteria. Their chefs work to create exciting new menus and dishes, so employees never get tired of the food!
【Working Hours】
9:00 - 17:30 (Mon-Fri)
【Holidays】
Saturday, Sunday, and National Holidays, Year-end and New Year Holidays, Paid Holidays, Other Special Holidays
【Services / Benefits】
Social insurance, Skillhouse Benefit, No indoor smoking (designated smoking area), Commutation Allowance, Free Cafeteria, and Casual dress code
【Requirements】
- End-to-end data engineering experience with SQL, and operational experience with Linux systems
- Experience developing and building ecosystems using Python and shell scripting
- Experience with containerization and container orchestration, such as Docker and Kubernetes
- Experience building batch-processing data pipelines (ETL) with tools such as Airflow, Digdag, Argo Workflows, Informatica, Autosys, or Tidal
- Experience with applications leveraging "Big Data" systems such as Hadoop, Hive, Spark, HDFS, and Presto