Serving tens of millions of customers each year, Changi Airport aims to create memorable experiences by connecting lives. Whether it is growing air links to the world, enhancing the experience at our various touchpoints, or curating personalized retail offerings, we are continually striving to better serve our customers at Changi Airport. To do so, we are enabled by technology, processes, data and the insights we generate from them.
Enterprise Data Science
Enterprise Data Science (EDS) comprises a growing team of analytics professionals with a mandate to help solve business problems across the enterprise. EDS leverages data science capabilities and adopts a “data first” approach in distilling actionable insights for key stakeholders, including senior management and business teams. EDS seeks to create positive impact and business value through data-driven decision-making.
We are looking for a savvy Data Engineer to join our vibrant team in the Enterprise Data Science unit. You are an experienced data pipeline builder and data wrangler who enjoys optimising data ecosystems and building them from the ground up. You will work in a team of like-minded enthusiasts, primarily undertaking ETL work to support the development of data products and data platforms across the enterprise.
Responsibilities
- Assemble large, complex data sets to empower exploratory and operational analytics.
- Identify, design and implement internal process improvements to optimize data delivery and enable greater scalability.
- Create and maintain data pipeline architecture for optimal extraction, transformation and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipelines to provide actionable insights into customer behavior, operational efficiency and other key business performance metrics.
- Collaborate with an internal data science community to solve complex problems and create business value.
- Work closely with business stakeholders to assist with data-related technical issues and support their data infrastructure needs.
- Recommend ways to improve data reliability, efficiency and quality.
Requirements
- Degree in Computer Science, Computer Engineering, Information Systems or another quantitative / computational discipline.
- Strong experience and track record in building data pipelines using distributed processing frameworks (e.g. Hadoop) and MPP databases (e.g. Redshift, BigQuery).
- Experience in designing and implementing ETL solutions using one of the leading ETL packages (e.g. AWS Glue, Pentaho, Informatica).
- Solid software development skills in at least one major language (e.g. Java) and scripting languages (e.g. Python).
- Experience with advanced schema design and data modelling techniques such as normalization, SCD and star schemas.
- Proficient in writing advanced and optimized SQL queries.
- Familiar with Agile and DevOps approaches.
- Strong project management and organizational skills supporting and working with cross-functional teams in a dynamic environment.
- Excellent communication skills (verbal, written, visual) for delivering insights (e.g. compelling presentations, easy-to-understand rationale) to senior management and internal and external stakeholders.
- Motivated and driven; able to work independently and to contribute as a good team player in a multidisciplinary team.