Overall Purpose: This career step requires career-level experience. The role is responsible for the performance of any or all functions involved in the development and maintenance of new or existing applications.
Key Roles and Responsibilities: Using programming languages and technology, writes code, completes programming and documentation, and performs testing and debugging of applications. Analyzes, designs, programs, debugs, and modifies software enhancements and/or new products used in local, networked, or Internet-related computer programs. May interact with users to define system requirements and/or necessary modifications.
Job responsibilities include:
Monitoring and response for Databricks, data pipelines, and databases
Database problem management and resolution, for both VM-based and Azure PaaS databases
Change and configuration management: implementing changes, deployments, and performance tuning of databases, Azure Databricks, Spark jobs, and clusters
Scheduled maintenance of clusters and topology changes in line with business requirements
Storage account (Data Lake, Blob, File Share) maintenance and housekeeping (see the housekeeping sketch after this list)
Service assurance management, status reporting, and metrics collection/reporting
Certificate renewal and regular rotation of database credentials
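The storage housekeeping duty could be approached with a small script such as the minimal sketch below, which deletes blobs past an assumed 30-day retention window using the azure-storage-blob SDK. The connection string, container name, and retention period are illustrative assumptions, not an actual policy.

```python
# Minimal housekeeping sketch: delete blobs older than an assumed retention window.
# The connection string, container name, and 30-day window are placeholders.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobServiceClient

RETENTION_DAYS = 30  # assumed retention policy
cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("staging")  # hypothetical container

for blob in container.list_blobs():
    # last_modified is timezone-aware UTC, so it compares cleanly with cutoff.
    if blob.last_modified < cutoff:
        container.delete_blob(blob.name)
        print(f"Deleted {blob.name} (last modified {blob.last_modified:%Y-%m-%d})")
```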
Key Roles & Responsibilities
FAULT MANAGEMENT - Provide technical support and troubleshooting for Kepler platform and service issues, including:
alarm and KPI monitoring, both proactive and reactive (a minimal polling sketch follows this list).
investigating and diagnosing ETL, data streaming pipeline, and database issues.
triaging events, communicating event status, and coordinating root cause analysis (RCA).
mitigating and remediating Tier 2 technology fault conditions that span multiple production clusters.
restoring database, event streaming, and data transformation workflows.
implementing fixes or design changes where needed.
resolving incidents and Call to Work pages (via PagerDuty) independently, engaging additional support teams, engineering, and vendors as needed.
proactively monitoring and maintaining configurations to achieve designed performance and reliability levels, measured by key performance indicators (KPIs).
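As one illustration of proactive monitoring with paging, the sketch below polls the Databricks Jobs API for failed runs and raises a PagerDuty incident for each. The workspace URL, tokens, and routing key are placeholder assumptions, and this is an illustrative sketch, not Kepler's actual tooling.

```python
# Illustrative sketch: poll the Databricks Jobs API for failed runs and
# trigger a PagerDuty incident for each. Host, token, and routing key are
# placeholders, not real credentials.
import requests

DATABRICKS_HOST = "https://<workspace>.azuredatabricks.net"
DATABRICKS_TOKEN = "<personal-access-token>"
PAGERDUTY_ROUTING_KEY = "<events-v2-routing-key>"

def failed_runs(limit: int = 25) -> list[dict]:
    """Return recently completed job runs whose result state is FAILED."""
    resp = requests.get(
        f"{DATABRICKS_HOST}/api/2.1/jobs/runs/list",
        headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
        params={"completed_only": "true", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    runs = resp.json().get("runs", [])
    return [r for r in runs if r.get("state", {}).get("result_state") == "FAILED"]

def page(run: dict) -> None:
    """Trigger a PagerDuty incident via the Events API v2."""
    requests.post(
        "https://events.pagerduty.com/v2/enqueue",
        json={
            "routing_key": PAGERDUTY_ROUTING_KEY,
            "event_action": "trigger",
            "payload": {
                "summary": f"Databricks run {run['run_id']} failed",
                "source": DATABRICKS_HOST,
                "severity": "error",
            },
        },
        timeout=30,
    ).raise_for_status()

if __name__ == "__main__":
    for run in failed_runs():
        page(run)
```

In practice a check like this would run on a schedule and deduplicate against already-paged runs so repeated polls do not retrigger the same incident.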
CHANGE MANAGEMENT
Manage, change, and track ETL and data streaming pipelines on Spark and Azure Databricks.
Manage, coordinate, track, and report on activities performed in production.
Create AOTS-CM tickets and ensure production activities remain within GTOC compliance.
Manage maintenance windows, working with the team to implement desired changes.
Own and manage the change management process for activities on the Kepler platform, building on existing processes.
TECHNOLOGY INSERTION AND PRODUCT DEVELOPMENT
Partner with Labs, product development, and engineering teams to create process documentation, methods, procedures, tools, and guidelines for the current Kepler roadmap and capacity management, and create and execute use cases, test cases, and synthetic testing during new technology instantiation.
Participate in collaboration sessions with engineering and vendors on product capabilities and improvements.
Review and interpret technical documentation and specifications for inclusion in, or adaptation into, database templates.
May interact virtually with stakeholders to explain the root cause of issues.
Collaborate with upstream and downstream partners to conduct change management procedures.
Develop applications and scripts, act as system DBA (database administrator), and build the SharePoint knowledge base and internal messaging bots while maintaining other critical software and data platforms.
Apply tools and techniques for analyzing and documenting logical relationships among logs, traces, data, processes, and events, translating business problems into insights.
Ensure that the environments where Kepler databases and data pipelines are deployed stay continuously in compliance, with no open security issues (i.e., implement and deploy security fixes and version upgrades, and renew certificates on time; a minimal expiry-check sketch follows this list).
Ensure 24x7x365 availability of the platform. Must be flexible to work three rotational India shifts covering morning, afternoon, and night, with monthly rotation.
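Renewing certificates on time could be supported by a small check like the sketch below, which lists Azure Key Vault certificates and flags those expiring within an assumed 30-day window. The vault URL and warning window are hypothetical.

```python
# Hedged sketch: flag Key Vault certificates that expire within an assumed
# 30-day window so they can be renewed on time. Vault URL is a placeholder.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient

VAULT_URL = "https://<vault-name>.vault.azure.net"  # hypothetical vault
WARN_WINDOW = timedelta(days=30)  # assumed warning threshold

client = CertificateClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())
now = datetime.now(timezone.utc)

for cert in client.list_properties_of_certificates():
    if cert.expires_on and cert.expires_on - now < WARN_WINDOW:
        print(f"RENEW SOON: {cert.name} expires {cert.expires_on:%Y-%m-%d}")
```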
AUTOMATION
Perform automation through scripting and programming to bring efficiency to support and monitoring functions.
Perform data analysis on platform and/or organizational metrics using industry-standard analytics and data presentation tools.
Enhance existing proactive monitoring and issue detection capabilities for Kepler.
Automate scheduled startup and shutdown of services based on business needs (a minimal start/stop sketch follows this list).
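A scheduled start/stop automation might look like the minimal sketch below, built against the Databricks Clusters API 2.0. The host, token, cluster ID, and business-hours window are all assumptions.

```python
# Minimal sketch: terminate a cluster outside assumed business hours and
# restart it in the morning. Host, token, and cluster ID are placeholders.
from datetime import datetime

import requests

HOST = "https://<workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"
CLUSTER_ID = "<cluster-id>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def set_cluster_state(running: bool) -> None:
    """Start or terminate the cluster via the Databricks Clusters API 2.0."""
    # 'delete' terminates the cluster; it does not permanently remove it.
    endpoint = "start" if running else "delete"
    resp = requests.post(
        f"{HOST}/api/2.0/clusters/{endpoint}",
        headers=HEADERS,
        json={"cluster_id": CLUSTER_ID},
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    hour = datetime.now().hour
    set_cluster_state(running=8 <= hour < 20)  # assumed business hours
```

In practice a script like this would run from a scheduler (for example, cron or an Azure Automation runbook) and would check the cluster's current state first, since starting an already-running cluster returns an error.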
Education
Bachelor's degree in Computer Science, Engineering, or Operations preferred
Preferred Experience: 4-8 years
Must Have Certifications: DP-900 (Azure Data Fundamentals)
Required Skills (Keywords): DP-203, DP-300, DP-900, Azure Databricks, Data Pipelines, Data Streaming, ETL, Spark, Data Processing, Azure Data Lake, NoSQL, Cosmos DB, Solr, Azure SQL, Data warehousing, EventHubs, Cassandra
#SoftwareEngineering
Weekly Hours:
40
Time Type:
Regular
Location:
Hyderabad, Telangana, India
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities.