Roles & Responsibilities:
- Understand business requirements and actively provide input from a data perspective.
- Understand the underlying data and how it flows.
- Build simple to complex pipelines and dataflows.
- Implement modules that have security and authorization frameworks.
- Recognize and adapt to changes in processes as the project evolves in size and function.
- Own the data integration pipeline.
- Bring in data integration standards and implement them.
- Build dataflows and workflows with job-failover design.
- Build reusable assets and framework components.
Knowledge, Skills & Abilities:
- Expert-level knowledge of Azure Data Factory.
- Advanced knowledge of Azure SQL DB, Synapse Analytics, Power BI, T-SQL, Logic Apps, and Function Apps.
- Ability to analyze and understand complex data.
- Monitoring day-to-day Data Factory pipeline activity (see the monitoring sketch after this list).
- Designing, configuring, and managing pipelines to orchestrate data workflows.
- Implementing different activity types such as Copy, Data Flow, Databricks, and control flow activities (see the pipeline and trigger sketch after this list).
- Connecting to and integrating on-premises data sources using Self-hosted Integration Runtime.
- Setting up and managing triggers (Schedule, Event, Manual) to automate pipeline executions.
- Configuring linked services to connect to various data stores and defining datasets for data structures.
- Knowledge of Azure Data Lake is required; Azure services such as Analysis Services, SQL Databases, Azure DevOps, and CI/CD are a must.
- Knowledge of master data management, data warehousing, and business intelligence architecture.
- Experience in data modeling and database design with excellent knowledge of SQL Server best practices.
- Excellent interpersonal/communication skills (both oral and written) with the ability to communicate at various levels with clarity and precision.
- Clear understanding of the DW lifecycle and ability to contribute to design documents, unit test plans, and code review reports.
- Experience working in an Agile environment (Scrum, Lean, Kanban) is a plus.
- Knowledge of big data technologies: Spark framework, NoSQL, Azure Databricks, Python, Snowflake; working knowledge of Jupyter Notebooks and R programming.
- Knowledge of various file systems and ability to recommend one based on the design.
- MPP design experience and ability to recommend designs for optimal cluster utilization.
- Expert in Python and PySpark (see the PySpark sketch below).
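For the pipeline-monitoring requirement above, a minimal sketch is shown below. It assumes the azure-identity and azure-mgmt-datafactory packages; the subscription, resource group, and factory names are hypothetical placeholders.

```python
# Hedged sketch: list the last 24 hours of Data Factory pipeline runs and
# flag failures. All resource identifiers below are placeholders.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
FACTORY_NAME = "<data-factory-name>"    # placeholder

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Query pipeline runs updated in the last 24 hours.
now = datetime.now(timezone.utc)
filters = RunFilterParameters(
    last_updated_after=now - timedelta(days=1),
    last_updated_before=now,
)
runs = client.pipeline_runs.query_by_factory(RESOURCE_GROUP, FACTORY_NAME, filters)

# Print a summary and call out failed runs for follow-up.
for run in runs.value:
    print(run.pipeline_name, run.status, run.run_start)
    if run.status == "Failed":
        print(f"  -> investigate run {run.run_id}: {run.message}")
```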
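The pipeline-authoring, activity, and trigger bullets above can be pictured with the same SDK. The sketch below loosely follows the Data Factory Python quickstart pattern and assumes two datasets (SourceBlobDataset, SinkSqlDataset) already exist in the factory; every name is a hypothetical placeholder, and exact model signatures may vary slightly between SDK versions.

```python
# Hedged sketch: author a pipeline with one Copy activity and attach a daily
# schedule trigger. Dataset, pipeline, and trigger names are placeholders.
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureSqlSink,
    BlobSource,
    CopyActivity,
    DatasetReference,
    PipelineReference,
    PipelineResource,
    ScheduleTrigger,
    ScheduleTriggerRecurrence,
    TriggerPipelineReference,
    TriggerResource,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, factory = "<resource-group>", "<data-factory-name>"

# Copy activity: read from a blob dataset and land the data in Azure SQL.
copy = CopyActivity(
    name="CopyBlobToSql",
    inputs=[DatasetReference(type="DatasetReference", reference_name="SourceBlobDataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="SinkSqlDataset")],
    source=BlobSource(),
    sink=AzureSqlSink(),
)
client.pipelines.create_or_update(
    rg, factory, "NightlyCopyPipeline", PipelineResource(activities=[copy])
)

# Schedule trigger: run the pipeline once a day.
trigger = TriggerResource(
    properties=ScheduleTrigger(
        recurrence=ScheduleTriggerRecurrence(
            frequency="Day", interval=1,
            start_time=datetime.now(timezone.utc), time_zone="UTC",
        ),
        pipelines=[
            TriggerPipelineReference(
                pipeline_reference=PipelineReference(
                    type="PipelineReference", reference_name="NightlyCopyPipeline"
                )
            )
        ],
    )
)
client.triggers.create_or_update(rg, factory, "DailyCopyTrigger", trigger)
client.triggers.begin_start(rg, factory, "DailyCopyTrigger").result()
```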
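For the Python/PySpark expectation, a short transformation sketch is below; the input path, columns, and output path are hypothetical, standing in for a data lake mount on Databricks or Synapse Spark.

```python
# Minimal PySpark sketch: read raw CSVs, derive daily revenue per region, and
# write partitioned Parquet. Paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

# Read raw CSV data (placeholder path for an ADLS / Databricks mount).
raw = spark.read.option("header", True).csv("/mnt/raw/sales/*.csv")

# Clean types and aggregate: total revenue per region per day.
daily_revenue = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("region", "order_date")
       .agg(F.sum("amount").alias("revenue"))
)

# Write as Parquet, partitioned by date, replacing any previous output.
(daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("/mnt/curated/daily_revenue"))
```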
Qualifications & Experience:
- Bachelor's or master's degree in computer science or a related field.
- At least 6-10 years of data engineering or software development experience.
#SoftwareEngineering
Weekly Hours:
40
Time Type:
Regular
Location:
Bangalore, Karnataka, India
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities.