
Data Engineer - AWS/PySpark
At Barclays, we are constantly pushing the boundaries of innovation to provide the best financial services for our clients. As a Data Engineer with expertise in AWS and PySpark, you will be a crucial member of our team, responsible for designing, building, and maintaining our data infrastructure. Your contributions will directly impact our ability to make data-driven decisions and drive business growth. We are looking for a dynamic individual who is passionate about data and has a strong understanding of AWS and PySpark. If you have a desire to work in a fast-paced, collaborative environment and are driven by solving complex data challenges, we want to hear from you. Join us at Barclays and be a part of shaping the future of financial services.
- Design and implement data infrastructure using AWS and PySpark to support business needs and objectives.
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet those needs.
- Develop and maintain data pipelines to ensure timely and accurate data delivery.
- Monitor and troubleshoot data issues, ensuring data quality and integrity.
- Continuously evaluate and improve existing data infrastructure to optimize performance and scalability.
- Stay up-to-date with industry trends and best practices in data engineering and AWS.
- Work closely with data scientists and analysts to provide them with the necessary data sets and resources for their analysis.
- Develop and maintain documentation for data infrastructure and processes.
- Collaborate with IT and security teams to ensure data infrastructure and processes adhere to company standards and regulations.
- Provide technical guidance and support to junior data engineers.
- Proactively identify and address potential data infrastructure issues and recommend solutions.
- Participate in code reviews and ensure coding standards are followed.
- Contribute to the development of data engineering best practices and standards.
- Communicate effectively with team members and stakeholders to provide updates on project progress and any potential roadblocks.
- Take ownership of data engineering projects and work independently to deliver high-quality results.
- Continuously seek opportunities to optimize and automate data processes for improved efficiency and accuracy.
- Uphold a culture of data-driven decision making by promoting and advocating for the use of data in decision making.
- Adhere to company policies and procedures, including those related to data privacy and security.
- Act as a subject matter expert on AWS and PySpark and provide training and support to team members as needed.
- Strive for excellence in all data engineering tasks and contribute to the overall success of the team and the organization.
Expertise in AWS cloud services: The ideal candidate should have a strong understanding of various AWS services such as EC2, S3, RDS, Redshift, and Lambda. They should be able to design, implement, and maintain data pipelines and workflows using these services.
Proficiency in PySpark: The candidate should have hands-on experience with PySpark, a popular open-source framework for large-scale data processing. They should be able to write efficient and optimized code to manipulate and analyze large datasets.
Knowledge of data warehousing and ETL: A strong understanding of data warehousing concepts and ETL (extract, transform, load) processes is essential for this role. The candidate must be able to design and implement data models and perform data transformations using various tools and technologies.
Familiarity with Agile methodologies: Barclays follows an Agile methodology for its software development processes. The candidate should have experience working in an Agile environment and be able to collaborate effectively with cross-functional teams.
Strong problem-solving and analytical skills: As a data engineer, the candidate will be responsible for troubleshooting and resolving complex data-related issues. They should possess strong problem-solving and analytical skills to identify and fix issues in a timely manner.
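To give a concrete sense of the ETL work described above, here is a minimal sketch of an extract-transform-load pipeline using only the Python standard library. The sample data, column names, and filtering rule are entirely hypothetical — in practice a role like this would use PySpark and AWS services rather than csv and sqlite3, but the three-stage shape is the same.

```python
import csv
import sqlite3
from io import StringIO

# Hypothetical raw data standing in for a file extracted from a source system.
RAW_CSV = """account_id,amount,currency
A-100,1250.50,GBP
A-101,99.99,GBP
A-102,430.00,USD
"""

def extract(source: str) -> list[dict]:
    """Extract: parse CSV text into a list of row dictionaries."""
    return list(csv.DictReader(StringIO(source)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: keep only GBP rows and cast amounts to floats."""
    return [
        (row["account_id"], float(row["amount"]))
        for row in rows
        if row["currency"] == "GBP"
    ]

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: write the cleaned rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS transactions (account_id TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO transactions VALUES (?, ?)", rows)
    conn.commit()

# Run the pipeline end to end against an in-memory database.
conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT SUM(amount) FROM transactions").fetchone()[0]
```

The same extract/transform/load separation scales up directly: in a PySpark job, `extract` becomes a DataFrame read from S3, `transform` becomes DataFrame operations, and `load` becomes a write to Redshift or another warehouse.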
Data Analysis
Data Modeling
Machine Learning
Data Warehousing
Data Visualization
Cloud Computing
Big Data Processing
ETL Development
Python Programming
SQL Querying
Data Pipeline Automation
Communication
Conflict Resolution
Emotional Intelligence
Leadership
Time Management
Creativity
Teamwork
Active Listening
Adaptability
Problem-Solving
According to JobzMall, the average salary range for a Data Engineer - AWS/PySpark in Chennai, Tamil Nadu, India is between ₹5,00,000 and ₹15,00,000 per year. However, this can vary depending on factors such as experience, skills, and the specific company hiring for the role. Some companies may offer higher salaries or additional benefits such as bonuses, stock options, or relocation packages. It is important to research the specific company and their salary range for the role before negotiating a salary.
Barclays plc is a British multinational investment bank and financial services company, headquartered in London. Apart from investment banking, Barclays is organised into four core businesses: personal banking, corporate banking, wealth management, and investment management.
