
Independent Data Engineer

Employer
Cognizant
Location
Melbourne, Australia
Salary
Competitive
Closing date
Feb 14, 2025
Job Function
Other
Industry Sector
Finance - General
Employment Type
Full Time
Education
Bachelors
What makes Cognizant a unique place to work? The combination of rapid growth and an international and innovative environment! This is creating many opportunities for people like YOU - people with an entrepreneurial spirit who want to make a difference in this world.

At Cognizant, together with your colleagues from all around the world, you will collaborate on creating solutions for the world's leading companies and help them become more flexible, more innovative, and successful. Moreover, this is your chance to be part of the success story.

Position Summary
  • Design, develop, and implement software solutions for data integration, processing, and transformation across various platforms.
  • Utilise advanced programming techniques and frameworks to build scalable and maintainable applications.
  • Employ data integration technologies across the different configuration layers within the project.
  • Design databases and migrate data across Oracle and Microsoft SQL Server platforms.
  • Use Data Vault 2.0 data and process modelling techniques to clarify data flows and requirements for system application development.
  • Analyse and implement data models in the erwin Data Intelligence platform.
  • Implement ADF pipelines across different repositories in Microsoft Azure Data Factory.
  • Deploy data and pipelines in Visual Studio and monitor pipelines in Azure DevOps.
  • Mentor junior engineers and contribute to technical decision-making processes.
  • Collaborate with cross-functional teams using Agile/Scrum methodologies, participating in sprint planning, daily stand-ups, and retrospectives.
  • Implement and maintain robust testing frameworks using Azure Data Factory Pipelines and Azure SQL, including unit, integration, and performance testing, to ensure software reliability.
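The Data Vault 2.0 modelling technique mentioned above separates business keys (hubs), relationships (links), and descriptive attributes (satellites). A minimal sketch of a hub/satellite pair, run here with Python's sqlite3 for illustration; table, column, and key names are hypothetical, not taken from the posting:

```python
import sqlite3
import hashlib
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Data Vault 2.0: the hub holds only the business key plus load
# metadata; descriptive attributes live in a separate satellite.
cur.execute("""
CREATE TABLE hub_customer (
    customer_hk TEXT PRIMARY KEY,   -- hash of the business key
    customer_id TEXT NOT NULL,      -- source business key
    load_date   TEXT NOT NULL,
    record_src  TEXT NOT NULL
)""")
cur.execute("""
CREATE TABLE sat_customer_details (
    customer_hk TEXT NOT NULL,
    load_date   TEXT NOT NULL,
    name        TEXT,
    city        TEXT,
    PRIMARY KEY (customer_hk, load_date)
)""")

def hash_key(business_key: str) -> str:
    """Deterministic hash key derived from the business key."""
    return hashlib.md5(business_key.upper().encode()).hexdigest()

now = datetime.now(timezone.utc).isoformat()
hk = hash_key("CUST-001")
cur.execute("INSERT INTO hub_customer VALUES (?, ?, ?, ?)",
            (hk, "CUST-001", now, "ORACLE_CRM"))
cur.execute("INSERT INTO sat_customer_details VALUES (?, ?, ?, ?)",
            (hk, now, "Acme Pty Ltd", "Melbourne"))

# Re-joining hub and satellite reconstructs the business view.
cur.execute("""
SELECT h.customer_id, s.name, s.city
FROM hub_customer h
JOIN sat_customer_details s ON s.customer_hk = h.customer_hk
""")
row = cur.fetchone()
print(row)  # → ('CUST-001', 'Acme Pty Ltd', 'Melbourne')
```

In practice the same pattern would be loaded via ADF pipelines rather than ad-hoc inserts; the point here is only the hub/satellite separation and the hash-key join.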
Mandatory Skills
  • Bachelor's degree in Computer Science, Information Technology, Software Engineering, or a related field.
  • Experience in designing and developing scalable software solutions using modern programming languages.
  • Proficiency in SQL and Data Querying: 5+ years of experience working with SQL in professional environments, including writing optimized queries, designing relational database schemas, and working with complex joins, subqueries, and aggregations.
  • Experience with Data Warehousing and Cloud Platforms: 5+ years of hands-on experience managing and designing data warehouses, working with cloud storage (especially Azure), and integrating cloud-based tools for data processing.
  • Advanced Proficiency with Data Pipeline Technologies: 5+ years of experience working with data pipeline tools or cloud-native services; the ability to orchestrate, automate, and monitor data pipelines is critical, as is troubleshooting and optimizing pipelines for scalability and fault tolerance.
  • Proficient in data modelling concepts (Kimball, Inmon, Data Vault 2.0) with experience with Azure SQL.
  • Skilled in unit, integration, and performance testing to ensure software reliability.
  • Familiar with medallion architecture principles and best practices.
  • Experience with data validation, testing and deployment, including Unit Testing, Integration Testing, Data Quality Testing, Performance Testing and skilled in Continuous Integration and Continuous Deployment (CI/CD).
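The SQL proficiency described above centres on joins, subqueries, and aggregations. A small, self-contained sketch using Python's sqlite3 with a hypothetical customers/orders schema (all names illustrative):

```python
import sqlite3

# Hypothetical schema to illustrate joins, GROUP BY aggregation,
# and a scalar subquery in one statement.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     customer_id INTEGER,
                     amount REAL);
INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 40.0);
""")

# Customers whose total spend exceeds the average order value:
# JOIN + GROUP BY aggregation + scalar subquery in the HAVING clause.
cur.execute("""
SELECT c.name, SUM(o.amount) AS total
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.name
HAVING SUM(o.amount) > (SELECT AVG(amount) FROM orders)
ORDER BY total DESC
""")
rows = cur.fetchall()
print(rows)  # → [('Alice', 200.0)]
```

The average order value is 80.0, so only Alice (total 200.0) clears the threshold; the same query shape scales to the schema-design and optimization work the role describes.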
Roles And Responsibilities
  • Write efficient, maintainable code and optimise software applications for performance and scalability.
  • Design and implement software solutions to facilitate historical data migration and application development.
  • Integrate and transform data, including writing complex SQL queries or using data transformation frameworks to clean and enrich data.
  • Build, maintain, and develop data pipelines, including pipelines that automate collecting, processing, and transforming raw data into a usable format for analysis.
  • Implement solutions to migrate historical data from Oracle to the Azure SQL platform using Azure Data Factory.
  • Develop software systems ensuring data quality, reliability, and consistency across all modules.
  • Design and implement a Data Quality Framework solution to ensure data accuracy and reliability.
  • Ensure data quality and validation, including implementing checks and validations within the data pipeline to monitor the accuracy, consistency, and completeness of the data.
  • Test and deploy data, including verifying that the data pipeline (or data infrastructure) correctly ingests, processes, and outputs data, and applying continuous integration and continuous deployment (CI/CD).
  • Manage cloud infrastructure and automation, including Azure Cloud Services hosting databases, storage, and compute resources, and automate data workflows and processes.
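The in-pipeline data quality checks described above (accuracy, consistency, completeness) can be sketched as a small validation pass over a batch of records. A minimal, framework-free illustration; the rule names and record schema are assumptions, not from the posting:

```python
def run_quality_checks(rows):
    """Return a list of (rule, failing_row) violations for a batch."""
    violations = []
    seen_ids = set()
    for row in rows:
        # Completeness: mandatory fields must be present and non-empty.
        if row.get("customer_id") in (None, ""):
            violations.append(("missing_customer_id", row))
        # Consistency: the primary key must be unique within the batch.
        if row.get("id") in seen_ids:
            violations.append(("duplicate_id", row))
        seen_ids.add(row.get("id"))
        # Accuracy: amounts must be non-negative numbers.
        amount = row.get("amount")
        if not isinstance(amount, (int, float)) or amount < 0:
            violations.append(("invalid_amount", row))
    return violations

batch = [
    {"id": 1, "customer_id": "CUST-001", "amount": 120.0},
    {"id": 2, "customer_id": "", "amount": 80.0},        # incomplete
    {"id": 2, "customer_id": "CUST-002", "amount": -5},  # duplicate + invalid
]
problems = run_quality_checks(batch)
print([rule for rule, _ in problems])
# → ['missing_customer_id', 'duplicate_id', 'invalid_amount']
```

In a production pipeline these checks would typically run as a validation activity between ingestion and load, with violations routed to a quarantine table rather than printed.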
Salary Range: >$100,000
Date of Posting: 15/01/2025
Next Steps: If you feel this opportunity suits you, or Cognizant is the type of organization you would like to join, we want to have a conversation with you! Please apply directly with us.

For a complete list of open opportunities with Cognizant, visit http://www.cognizant.com/careers. Cognizant is committed to providing Equal Employment Opportunities. Successful candidates will be required to undergo a background check.
