
Data Engineering Engineer

Job ID R0158127 | Posted 08/07/2025 | Location: Delegación Cuajimalpa de Morelos, Mexico


Job Description

Objective / Purpose

The Data Engineer will be a crucial member of the Data Science Institute, contributing to the development and optimization of data pipelines and architectures that support advanced data science initiatives. This role will enhance data accessibility, reliability, and efficiency across the organization.

Accountabilities

  • Assist in end-to-end data flow engineering and software development strategies.
  • Develop and manage data pipelines for extracting, transforming, and loading (ETL) data from various sources into data warehouses or data lakes.
  • Monitor the performance of data pipelines and infrastructure, identify bottlenecks, and optimize processes to improve efficiency and reliability.
  • Implement data observability practices to ensure data quality and publish relevant metrics to a catalog or repository.
  • Seek out new perspectives and learning opportunities, and apply existing skills to develop new ones.
  • Stay current with industry trends, designs, and alternative approaches across technology, science, and operations.
  • Engage with software development practices, tools, algorithms, and technologies related to data architectures, data engineering, and data science.
  • Apply experience with complex analytic problems involving diverse, high-dimensional data in life sciences or similarly complex domains.
  • Preferred: experience with programming languages such as Scala, Java, or Python and tools such as Apache Spark, Apache Kafka, or Apache Airflow for building scalable and efficient pipelines (a minimal sketch of such a pipeline follows this list).
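
A minimal sketch of the kind of ETL pipeline described above, assuming Apache Spark with Scala; the source and target paths and the column names (order_id, load_date) are illustrative placeholders rather than actual Takeda systems.

    import org.apache.spark.sql.{SparkSession, functions => F}

    object OrdersEtl {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("orders-etl-sketch")
          .getOrCreate()

        // Extract: read raw CSV files from a (hypothetical) landing zone.
        val raw = spark.read
          .option("header", "true")
          .csv("s3://example-landing-zone/orders/")

        // Transform: drop rows missing the key and stamp the load date.
        val cleaned = raw
          .filter(F.col("order_id").isNotNull)
          .withColumn("load_date", F.current_date())

        // Load: write a date-partitioned Parquet table in the warehouse or lake.
        cleaned.write
          .mode("overwrite")
          .partitionBy("load_date")
          .parquet("s3://example-warehouse/orders/")

        spark.stop()
      }
    }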

Education & Competencies (Technical and Behavioral)

  • Bachelor's degree in Computer Science or equivalent.
  • 2+ years of relevant experience.
  • Foundational knowledge of computer science architecture, algorithms, and interface design.
  • Up-to-date specialized knowledge of data engineering, data manipulation, and data management technologies to effect change across business units, including an understanding of advanced methodologies for data and software development (life sciences experience preferred).
  • Ability to manipulate voluminous data with varying degrees of structure across disparate sources, and to build and communicate actionable insights for internal or external parties.
  • Software development skills and ability to contribute to the development of new data engineering and analytic services.
  • Knowledge of vibe coding is required to enhance data processing and pipeline efficiency.
  • Willingness to learn and adapt to new technologies and methodologies, fostering continuous personal and professional growth.
  • Good knowledge of Apache Spark and Scala or Java is required for building scalable and efficient data pipelines (see the sketch after this list).
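
A minimal sketch of combining disparate sources into an actionable summary with Spark and Scala, as referenced above; the datasets, paths, and column names (customer_id, order_total, region) are hypothetical placeholders used only for illustration.

    import org.apache.spark.sql.{SparkSession, functions => F}

    object RevenueByRegion {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("revenue-by-region-sketch")
          .getOrCreate()

        // Two differently structured sources: columnar order facts and semi-structured customer records.
        val orders    = spark.read.parquet("s3://example-warehouse/orders/")
        val customers = spark.read.json("s3://example-landing-zone/customers/")

        // Join on the shared key and aggregate into a summary a stakeholder can act on.
        val summary = orders
          .join(customers, Seq("customer_id"))
          .groupBy("region")
          .agg(
            F.sum("order_total").as("revenue"),
            F.countDistinct("customer_id").as("unique_customers"))

        summary.write.mode("overwrite").parquet("s3://example-warehouse/revenue_by_region/")
        spark.stop()
      }
    }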

Locations

MEX - Santa Fe

Worker Type

Employee

Worker Sub-Type

Regular

Time Type

Full time