Data Engineer

Salary
8,000,000 JPY - 9,000,000 JPY per year
Japanese Level
Advanced (JLPT Level 1)
English Level
Advanced (TOEIC 860)
Start Date


Data engineers working in the datalake team carry out a wide variety of business intelligence tasks in a largely AWS-based cloud computing environment.


- Building high-quality, sustainable data pipelines and ETL processes to extract data from a variety of APIs and ingest it into cloud-based services.
- Efficiently developing complex SQL queries to aggregate and transform data for the analytics team and general users.
- Maintaining accurate, error-free databases and datalake structures.
- Conducting quality assessment and integrity checks on both new and existing queries and processes.
- Monitoring existing solutions and working proactively to rapidly resolve errors and identify future problems before they occur.
- Using data visualization tools such as Power BI, SSRS, Tableau, and Looker to develop high-quality dashboards and reports.
- Consulting with a variety of stakeholders to gather new project requirements and transform these into well-defined tasks and targets.




- Working with the innovation lead to drive projects and day-to-day operations involving the Enterprise Data Department
- Analyzing business needs and the technical environment in response to requests from the Data Strategy Department and business divisions, and proposing solutions
- Developing data science POC models (scoring models, machine learning, AI, etc.) jointly with the Data Strategy Department
- Implementing POC models developed in the Data Excellence Division into the production environment jointly with the IT department
- Designing the AWS datalake used for BI by business users and managing it jointly with the IT department
- Extracting and providing data to each stakeholder (including internal departments and external organizations) based on business-user requests
- Assisting each project leader with the planning, design, scheduling, and execution management of their projects

【会社概要 | Company Info】
Global general insurance group founded in Europe over 200 years ago, operating in over 62 countries with a 20+ year presence in Japan.

【就業時間 | Working Hours】
9:00 - 17:30 (Mon - Fri)

【休日休暇 | Holidays】
Saturdays, Sundays, Year-end Holidays, Paid Holiday, Condolence Leave, etc.

【待遇・福利厚生 | Benefits】
Annual salary review, bonus twice a year, performance-based incentive, full social insurance, transportation fee, retirement fund system, savings system, employee stock ownership plan, no smoking indoors (designated outdoor smoking area), etc.

Required Skills

- Experience in data/analytics, with at least 1 year working in an engineering/BI role.
- Experience working on data pipelines or analytics projects with languages such as Python, Scala, or Node.js.
- Experience working on data pipelines or analytics projects with SQL/NoSQL databases (ideally in a Hadoop-based environment).
- Strong knowledge and practical experience with at least four of the following AWS services: S3, EMR, ECS/EC2, Lambda, Glue, Athena, Kinesis/Spark Streaming, Step Functions, CloudWatch, DynamoDB.
- Strong experience working with data processing and ETL systems such as Oozie, Airflow, Azkaban, Luigi, and SSIS.
- Experience developing solutions inside a Hadoop stack using tools such as Hive, Spark, Storm, Kafka, Ambari, and Hue.
- Ability to work with large volumes of both raw and processed data in a variety of formats, including JSON, ORC, Parquet, and CSV.
- Ability to work in a Linux/Unix environment (predominantly via EMR, the AWS CLI, and the Hadoop File System).
- Experience with DevOps solutions such as Jenkins, GitHub, Ansible, Docker, and Kubernetes.


- Programming experience (Java or Python)
- Experience building data warehouses and modeling RDB and NoSQL databases
- Experience building analytics environments with BI tools (Spotfire, Tableau, Power BI, Domo, QuickSight, etc.)
- Experience building high-performance systems using AWS services (Kinesis, Lambda, EMR, ECS, etc.)
- Development experience following a development methodology (waterfall, Scrum, XP, etc.)

Preferred Skills

- Demonstrated experience and expertise in setting up and maintaining cloud data solutions and AWS infrastructure will be highly regarded.
- Strong knowledge of cloud-based data security, encryption, and protection methods will also be highly regarded.


- Experience building systems using the Hadoop stack (HDFS, Hive, Spark, Storm, Kafka, Ambari, Hue, etc.)
- Experience setting up DevOps environments (Jenkins, GitHub, Ansible, Docker, Kubernetes, etc.)