Applying to AMD: need details
Job Description
The CPU inferencing team at AMD is hiring technologists with expertise in PyTorch, TensorFlow, ONNX-RT, and optimizing ML models for performance!
If you're interested in doing cutting-edge work on a dynamic team, please use the Apply button to apply.
THE ROLE:
AMD India is hiring talented software engineers and architects to work on ZenDNN, our flagship CPU inferencing product. The role involves performance optimization of state-of-the-art machine learning workloads running in open-source ML frameworks on our CPU servers.
THE PERSON:
We are looking for smart, creative people who have a passion for solving complex problems. The ideal candidate has a strong background in optimizing ML models and libraries for performance, preferably on x86 architectures. Expertise in open-source ML frameworks such as PyTorch, TensorFlow, and ONNX-Runtime is highly relevant. The candidate should also have a solid understanding of software quality and processes.
REQUIRED SKILLS:
Strong background in machine learning fundamentals, including deep learning, large language models, and recommender systems.
Experience working with open-source frameworks such as PyTorch, TensorFlow, and ONNX-Runtime.
Experience in profiling ML workloads and optimizing them for performance.
Strong background in C++ and Python programming.
Excellent problem-solving skills and willingness to think outside the box.
Experience with production software quality assurance practices, methodologies, and procedures.
Excellent communication skills and experience working with global teams.
KEY RESPONSIBILITIES:
Development of ZenDNN, our flagship CPU inferencing product.
Innovating to improve performance and customer ease-of-use on state-of-the-art ML workloads.
PREFERRED EXPERIENCE:
Exposure to compiler technology, including LLVM.
Scripting languages such as Perl or Python.
ACADEMIC CREDENTIALS:
Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, or Computer Engineering (or equivalent), with at least 8 years of experience.