Lead Technical Data Engineer
- Employer
- OCBC Bank
- Location
- Singapore, Singapore
- Salary
- Competitive
- Closing date
- Dec 18, 2022
- Job Function
- Other
- Industry Sector
- Finance - General
- Employment Type
- Full Time
- Education
- Bachelors
Bank of Singapore is currently looking for a qualified candidate to support the Data Engineering Team's operational and analytical needs. The Lead Technical Data Engineer will work alongside the Data Analytics, big data platform (Hadoop), and engineering teams to provide data-related support, data ingestion, data interfaces for unstructured/structured data, data analytics, and data management. This person will also help build interfaces from various upstream systems, ingest the data into the Cloudera Hadoop data store, and build out the enterprise visualization tool. This is a great opportunity for someone interested in joining an innovative group, with the possibility of tremendous career development in data engineering, big data management, data analytics, and enterprise data visualization tooling.
We are looking for a hands-on technical expert with proficiency in data modelling and visualization/reporting to implement analytics projects in the wealth management and financial services industry.
A little more about this role:
As our Lead Technical Data Engineer, you will be instrumental in big data coding and will work within the Hadoop ecosystem. This is a brand-new position in the Data Competency vertical.
- Perform extensive structured/unstructured data ingestion into Hadoop
- Strong hands-on knowledge of Apache Hadoop/Spark, Anaconda/Python, data visualisation BI tools, Java/Scala, and HIVE
- Ability to organize and lead meetings with business and operational data owners
- Experience integrating data processes with architecture requirements used across the company
- Understanding of the Hadoop ecosystem and data engineering activities, including loading data from several disparate datasets and documenting the process
- Strong ability to troubleshoot and resolve data issues
- Analytical skills to perform data profiling and data visualisation
- Experience in Agile and Waterfall frameworks
- Work closely with engineering and operations to document business processes
- Work independently and with team members to understand database structures and business processes
- Help form data management and governance processes within the data engineering team
Qualifications
Mandatory Skillset:
- Degree in Computer Science or Information Systems.
- 3+ years of techno-functional experience in wealth management (especially finance and data)
- 14+ years of hands-on experience in the analytics space, specifically data modelling and Business Intelligence (BI) / visualization / reporting projects
- Experience managing a team of at least 5 members and establishing a centre of excellence
- Strong senior stakeholder management capabilities, strategizing solutions to achieve business outcomes
- Hands-on experience analysing data models and designing and coding efficient data structures for the BI and visualization layer
- Hands-on experience on the Hadoop/Spark ecosystem, HDFS, HIVE, Spark
- Strong experience in one or more languages: Java, Python, Scala
- Proficiency in developing data visualizations using BI technologies such as Tableau and Power BI
- Excellent querying skills using SQL and experienced in data modelling
- Strong programming skills in PL/SQL
- Experience in structured data integration and transformation using ETL and ELT processes
- Familiarity with version control and automation infrastructure using container/Docker technologies
- Excellent written / oral communication skills.
- Proactive team player, working collaboratively with and through others.
- Develop and implement technical best practices for data movement, data quality, data cleansing, and other ETL-related activities. Develop and ensure adherence to locally defined standards for all developed components.
- Analyze source system data to assess data quality. Work with technical team to determine strategies for handling data anomalies that are identified. Design ETL processes and develop source-to-target data mappings, integration workflows, and load processes.
- Able to read data models and translate them into ETL loads into the data warehouse
What you'll need to have:
- Graduate degree in statistics, math, computer science, physics, or another technical field; a Master's degree is preferred
- Hands-on experience with, or knowledge of, at least one mainstream cloud infrastructure (AWS, MS Azure, GCP); ability to implement a data lake
- Passionate about data and analyzing business needs
- Previous experience on a data team in an agile environment preferred
- Develop in-depth plans and major milestones, for approval by top management, during the planning and design phases of the project