VetJobs - The Leading Military Job Board

Job Information

Google ML Compiler Engineer, XLA TPU Compiler Horizontal Scaling in Sunnyvale, California

Minimum qualifications:

  • Bachelor’s degree or equivalent practical experience.

  • 2 years of experience with software development in one or more programming languages (e.g., C++), or 1 year of experience with an advanced degree.

  • 2 years of experience with data structures or algorithms.

  • 2 years of experience with performance optimization, systems data analysis, visualization tools, or debugging.

  • Experience using compilers in software engineering.

Preferred qualifications:

  • Master's degree or PhD in Computer Science or related technical fields.

  • Experience as an engineer in compilers.

  • Experience with Machine Learning, Parallel Computing, and High Performance Computing (HPC).

  • Experience optimizing programs at distributed scale.

  • Experience debugging and programming concurrent/parallel computations.

  • Experience debugging correctness and performance issues at all levels of the stack.

Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities, and be enthusiastic to take on new problems across the full stack as we continue to push technology forward.

Our team develops the XLA TPU compiler used to partition, optimize, and run large machine learning models across multiple TPU devices for internal customers (e.g., Gemini, Ads, Search) and external Google Cloud customers. Gemini uses our compiler for all phases of model development (e.g., pre-training, fine-tuning, serving).

The XLA Horizontal Scaling Team’s software stack includes the XLA SPMD partitioner, collective and scheduling optimizations, and code generation for TPU hardware. We also develop Megascale XLA, which enables communication between devices over the Data Center Network (DCN) and is used to scale out to even larger machine learning models.
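
For context on the problem space described above, here is a minimal, hypothetical sketch (not part of this posting) of how XLA's SPMD partitioner is commonly driven from user code through JAX: sharding annotations on program inputs let the compiler split a computation across TPU (or other) devices and insert the collectives needed to combine partial results. The mesh axis name, shapes, and function below are illustrative assumptions only.

    import numpy as np
    import jax
    import jax.numpy as jnp
    from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

    # Arrange all visible devices into a 1-D mesh with one named axis, "data".
    mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

    @jax.jit
    def layer(x, w):
        # Under jit, XLA sees the input shardings; its SPMD partitioner
        # emits a per-device program plus any collectives required to
        # combine partial results.
        return jnp.dot(x, w)

    # Shard the batch dimension of x across the mesh; replicate w on every
    # device (assumes the batch size divides evenly by the device count).
    x = jax.device_put(jnp.ones((8, 128)), NamedSharding(mesh, P("data", None)))
    w = jax.device_put(jnp.ones((128, 64)), NamedSharding(mesh, P(None, None)))

    y = layer(x, w)
    print(y.sharding)  # the output stays sharded along the batch axis

In production, the same idea is applied to models spanning many TPU devices, which is where the partitioning, collective, and scheduling optimizations mentioned above come in.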

Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

The US base salary range for this full-time position is $161,000-$239,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google (https://careers.google.com/benefits/).

  • Contribute to a compiler that scales out machine learning models across accelerators (e.g., TPU/GPU) at Google and Google Cloud.

  • Engage with important production teams (e.g., Gemini), understand requirements, and analyze performance opportunities.

  • Implement critical features and performance improvements, and resolve critical production issues to increase production team velocity.

  • Work closely with users of TPUs to improve performance and efficiency, and to compile programs efficiently for large-scale machine learning model training and inference across distributed accelerator devices.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also https://careers.google.com/eeo/ and https://careers.google.com/jobs/dist/legal/OFCCPEEOPost.pdf. If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form: https://goo.gl/forms/aBt6Pu71i1kzpLHe2.
