Big Data Engineer, DD&T, Chengdu
Job ID: R0158886 | Posted: 07/23/2025 | Location: Chengdu, China
Job Description
Responsibilities:
- Develop and maintain scalable data pipelines and build new integrations using AWS-native technologies to support growing data sources, volumes, and complexity.
- Collaborate with analytics and business teams to improve data models that enhance business intelligence tools and dashboards, fostering data-driven decision-making across the organization.
- Implement processes and systems to reconcile data, monitor data quality, and ensure production data is accurate and available to key stakeholders, downstream systems, and business processes (a minimal sketch follows this list).
- Write unit, integration, and performance test scripts, contribute to engineering documentation, and maintain the engineering wiki.
- Perform data analysis to troubleshoot and resolve data-related issues.
- Work closely with frontend and backend engineers, product managers, and analysts to deliver integrated data solutions.
- Collaborate with enterprise teams, including Enterprise Architecture, Security, and Enterprise Data Backbone Engineering, to design and develop data integration patterns and models supporting various analytics use cases.
- Partner with DevOps and the Cloud Center of Excellence to deploy data pipeline solutions in Takeda's AWS environments, meeting security and performance standards.
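For a concrete picture of the reconciliation work described above, here is a minimal sketch of a Python AWS Lambda handler that compares source and target row counts and raises an alert on mismatch. The queue URL, environment variable, and event shape are hypothetical placeholders, not Takeda systems.

```python
"""Hypothetical reconciliation check: compares the row count reported by a
source extract against the row count loaded into the target, and alerts on
divergence. Resource names here are illustrative only."""
import json
import os

import boto3  # AWS SDK for Python

# Hypothetical SQS queue for data-quality alerts.
ALERT_QUEUE_URL = os.environ.get("ALERT_QUEUE_URL", "")
sqs = boto3.client("sqs")


def handler(event, context):
    """Lambda entry point. Assumes the upstream pipeline step passes
    {"table": ..., "source_count": ..., "target_count": ...}."""
    table = event["table"]
    source_count = int(event["source_count"])
    target_count = int(event["target_count"])

    if source_count == target_count:
        return {"table": table, "status": "reconciled", "rows": target_count}

    # Counts diverge: publish an alert for stakeholders, then fail the step
    # so the orchestrator can route the run to a remediation branch.
    sqs.send_message(
        QueueUrl=ALERT_QUEUE_URL,
        MessageBody=json.dumps(
            {
                "table": table,
                "source_count": source_count,
                "target_count": target_count,
                "status": "mismatch",
            }
        ),
    )
    raise ValueError(
        f"Reconciliation failed for {table}: "
        f"{source_count} source rows vs {target_count} target rows"
    )
```

In a Step Functions state machine, the raised error can be caught with a Catch rule to trigger the remediation path, which is one common way such checks are wired into AWS pipelines.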
Skills and Qualifications:
- Bachelor’s Degree from an accredited institution in Engineering, Computer Science, or a related field.
- 3+ years of experience in data engineering, software development, data warehousing, data lakes, and analytics reporting.
- Strong expertise in data integration, data modeling, and modern database technologies (graph, SQL, NoSQL) and AWS cloud technologies (e.g., DMS, Lambda, Databricks, SQS, Step Functions, data streaming, and visualization).
- Extensive experience in database administration, schema design and dimensional modeling, and SQL optimization (see the sketch after this list).
- Excellent written and verbal communication skills, with the ability to collaborate effectively with cross-functional teams.
- Understanding of good engineering practices (DevSecOps, source-code versioning).
- Fluency in English.
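As an illustration of the dimensional modeling and SQL optimization named above, the following self-contained Python sketch builds a toy star schema with the standard-library sqlite3 module and compares query plans before and after indexing the fact table's join key. All table and column names are hypothetical.

```python
"""Toy star schema plus an index-driven SQL optimization, shown with
sqlite3 so the sketch runs anywhere. Names are illustrative only."""
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A minimal star schema: one fact table keyed to one dimension table.
cur.executescript(
    """
    CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,
        calendar_date TEXT,
        fiscal_quarter TEXT
    );
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        date_key INTEGER REFERENCES dim_date(date_key),
        amount REAL
    );
    """
)

query = "SELECT SUM(amount) FROM fact_sales WHERE date_key = 20250101"

# EXPLAIN QUERY PLAN shows how the engine will execute the query;
# without an index, it must scan the whole fact table.
for row in cur.execute("EXPLAIN QUERY PLAN " + query):
    print("before index:", row)

# Indexing the fact table's foreign key is the classic optimization
# for filters and joins against a star-schema dimension.
cur.execute("CREATE INDEX ix_fact_sales_date_key ON fact_sales(date_key)")

for row in cur.execute("EXPLAIN QUERY PLAN " + query):
    print("after index:", row)

conn.close()
```

Comparing the two plans (a full scan versus an index search on date_key) is the basic workflow behind the SQL-optimization skill this posting asks for.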
Nice to Have:
- Experience with streaming technologies such as Spark Streaming or Kafka (see the sketch after this list).
- Infrastructure as Code (IaC) experience, preferably with Terraform.
- Experience designing and developing API data integrations using SOAP/REST.
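For the streaming item above, a minimal PySpark Structured Streaming sketch that counts Kafka events per key. It assumes a broker at localhost:9092, a topic named sales-events, and the spark-sql-kafka connector on the classpath; all of these are hypothetical placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Read a Kafka topic as an unbounded streaming DataFrame.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
    .option("subscribe", "sales-events")                   # hypothetical topic
    .load()
)

# Maintain a running count of events per message key.
counts = (
    events.selectExpr("CAST(key AS STRING) AS event_key")
    .groupBy("event_key")
    .count()
)

# Write the continuously updated aggregate to the console for inspection.
query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```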