Software Engineer III - Big Data & ML

JPMorganChase

Software Engineering, Data Science
Hyderabad, Telangana, India
Posted on Tuesday, May 28, 2024

Job Description

There’s nothing more exciting than being at the center of a rapidly growing field in technology and applying your skills to drive innovation and modernize the world's most complex and mission-critical systems.

As a Software Engineer III at JPMorgan Chase within the Employee Compute team, you serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities

  • Develop, test, and maintain the data and analytics needed for risk and finance models
  • Analyze and improve processes
  • Create and maintain technical documentation
  • Contribute to the group’s knowledge base by finding new and valuable ways to approach problems and projects
  • Deliver high-quality results under tight deadlines
  • Manipulate and summarize large quantities of data (a minimal sketch of this kind of work follows this list)
  • Apply knowledge of the consumer lending lifecycle, including loan origination, sale/servicing, and default management/loss mitigation
  • Manage multiple deliverables for different business groups and build strong customer relationships
  • Collaborate with the appropriate individuals (LOB users, subject matter experts, architects, DBAs, etc.) to design and implement the appropriate solution
  • Work with system administrators, users, and other development teams to manage enhancements and issues
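
To give a concrete flavor of the data work described above, here is a minimal PySpark sketch of summarizing a large loan-level dataset. The table name, column names, and values are illustrative assumptions, not details from the posting.

```python
# Minimal PySpark sketch: summarizing a large loan-level dataset.
# The table name, column names, and values below are illustrative
# assumptions; they are not taken from the job posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("loan-summary").getOrCreate()

# Hypothetical Hive table of consumer-lending records.
loans = spark.table("consumer_lending.loans")

# Aggregate balances and default rates by origination year and product.
summary = (
    loans
    .withColumn("orig_year", F.year("origination_date"))
    .groupBy("orig_year", "product_type")
    .agg(
        F.count("*").alias("n_loans"),
        F.sum("unpaid_balance").alias("total_upb"),
        F.avg(F.col("in_default").cast("double")).alias("default_rate"),
    )
    .orderBy("orig_year", "product_type")
)

summary.show(truncate=False)
```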

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 3+ years applied experience
  • 8+ years of professional development experience with languages and tools such as PySpark, Scala, Python, the big data tech stack, Unix, and shell scripting
  • Solid understanding of object-oriented design, data structures, and algorithms
  • Hands-on experience working with Hadoop distribution platforms such as Cloudera or Hortonworks, and advanced knowledge of and experience with the big data Hadoop components Spark, Hive, and Impala
  • Experience managing a large machine learning platform with petabytes of storage, developing and training machine learning models in Python, and supporting advanced machine learning tools such as XGBoost, TensorFlow, and Dask, and Anaconda packages such as pandas, Matplotlib, NumPy, and SciPy
  • Experience designing and onboarding big data platform environments, managing a large cluster with a large user base, and planning and onboarding infrastructure such as cluster compute, VSIs, and NAS storage to support business use cases across different LOBs
  • Experience setting up platform controls and governance processes by designing user access controls, Hadoop permissions, Sentry roles, and Unix Keon and Sophia roles, and installing new tools and packages in line with the firm-wide control process for machine learning model development on the big data platform
  • Good understanding of and knowledge in the design, development, and support of machine learning models using a graph database, including graph database software installation, configuration, and upgrades, as well as troubleshooting performance bottlenecks and optimizing queries
  • Good understanding of and knowledge in planning and managing AWS cloud infrastructure and enabling platform security with best-in-class cloud security solutions, along with hands-on experience migrating Spark applications from the big data Hadoop platform to AWS using EMR, S3 buckets, EBS, and EC2 instances (a minimal sketch follows this list)
  • Experience with platform monitoring tools such as Geneos for monitoring server utilization and setting thresholds and performance alerts, and hands-on experience developing automation scripts in Python and Unix shell scripting
  • Good oral and written communication skills
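
As an illustration of the cloud migration experience described above, here is a minimal sketch of a Spark job as it might run on AWS EMR after migrating off an on-prem Hadoop cluster, with S3 replacing HDFS for storage. The bucket names, paths, and columns are hypothetical.

```python
# Minimal sketch of a Spark job as it might run on AWS EMR after a
# migration from an on-prem Hadoop cluster: HDFS paths are replaced
# with S3 locations. Bucket names, paths, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("emr-migration-sketch").getOrCreate()

# On EMR, Spark can read s3:// paths directly via the EMRFS connector.
trades = spark.read.parquet("s3://example-bucket/raw/trades/")

daily_totals = (
    trades
    .groupBy("trade_date")
    .agg(F.sum("notional").alias("total_notional"))
)

# Write results back to S3 instead of HDFS.
daily_totals.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/daily_totals/"
)
```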

Preferred qualifications, capabilities, and skills

  • Degree in computer science or a numerate subject (e.g., engineering, sciences, or mathematics): either a Bachelor’s degree with 8 years of experience or a Master’s degree with 6 years of experience