BUSINESS UNIT OVERVIEW
Enterprise Technology Operations (ETO) is a Business Unit within Core Engineering focused on running scalable production management services. Its mandate is operational excellence and operational risk reduction, achieved through large-scale automation, best-in-class engineering, and the application of data science and machine learning. The Production Runtime Experience (PRX) team in ETO applies software engineering and machine learning to production management services, processes, and activities to streamline monitoring, alerting, automation, and workflows.
TEAM OVERVIEW
The Machine Learning & Analytics team in PRX applies generative AI, deep learning, predictive modelling, anomaly detection, time series forecasting, and other statistical modelling techniques to large-scale, high-velocity data to reduce the risk and cost of managing the firm’s massive compute infrastructure and applications.
ROLE AND RESPONSIBILITIES
The responsibilities of an individual in this role include:
• Understanding the problem space to identify high impact business problems and formulate them as machine learning or statistical modelling tasks.
• Developing performant, scalable, and resilient production-grade models compliant with model development standards.
• Collaborating closely with other application developers and data engineers to design and build robust data pipelines and deployment frameworks.
• Deploying models in production and monitoring model performance to ensure delivery of the desired business impact.
• Communicating the impact and complexity of these models clearly and simply to a broad audience.
QUALIFICATIONS
A Bachelor’s degree (Master’s/PhD preferred) in a computational field (Computer Science, Applied Mathematics, Engineering, or a related quantitative discipline), with 6+ years of experience as an applied data scientist (or equivalent).
ESSENTIAL SKILLS
• Strong analytical and problem-solving skills, along with a strong understanding of applied statistics and fundamental ML principles and techniques.
• Ability to apply fundamental algorithms and data structures to efficiently solve computational problems.
• Working knowledge of more than one programming language (e.g., Python, R, Java, C++).
• Hands-on experience with open-source distributed data processing frameworks (e.g., streaming, in-memory computation, or distributed storage systems such as Apache Kafka, Hazelcast, Apache HBase, or Apache Spark) is a plus.
• Hands-on experience with generative AI and deep learning frameworks such as TensorFlow and PyTorch is a plus.
• Ability to stay commercially focused and to push consistently for quantifiable commercial impact.
• A strong work ethic and a sense of ownership and urgency.
• Ability to collaborate effectively across global teams and communicate complex ideas in a simple manner.
• Proven track record of successfully leading projects and fostering a collaborative environment through effective mentorship.