Job Information

Meta Research Scientist, GenAI - Multimodal Audio (Speech, Sound and Music) in New York, New York

Summary:

The GenAI org at Meta builds industry-leading LLM and multimodal generative foundation models, which set the benchmark for open-source foundation models and power many Meta products. The team conducts industry-leading research on multimodal generative foundation models with a focus on the audio modality (including speech, sound, and music). It works closely with the language and vision research teams, and collaborates with product teams to bring its results to billions of Meta users around the world.

Required Skills:

Research Scientist, GenAI - Multimodal Audio (Speech, Sound and Music) Responsibilities:

  1. Conducting full life-cycle research on multimodal generative foundation models with a focus on the audio modality, including generating ideas

  2. Designing and implementing models and algorithms

  3. Collecting and selecting training data, training / tuning / scaling the models, evaluating performance, and open-sourcing and publishing the results

  4. Working with collaborating teams (e.g., language and vision) to leverage each other's strengths and deliver on high-level goals.

Minimum Qualifications:

  1. Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.

  2. Solid track record of research in the audio (speech, sound, or music) or vision (image or video) domains, demonstrated through either publications or unpublished industrial experience.

  3. PhD in a related field with 3+ years of experience, or BS with 5+ years of industrial research experience in a related field.

  4. Related research fields: audio (speech, sound, or music) generation, text-to-speech (TTS) synthesis, text-to-music generation, text-to-sound generation, speech recognition, speech / audio representation learning, vision perception, image / video generation, video-to-audio generation, audio-visual learning, audio language models, lip sync, lip movement generation / correction, lip reading, etc.

  5. Proven knowledge of neural networks.

  6. Experienced in at least one of the following popular ML frameworks: PyTorch, TensorFlow, or JAX.

  7. Experienced in the Python programming language.

  8. Solid communication skills.

Preferred Qualifications:

  1. Solid publication track record in related fields.

  2. Solid experience in at least one of the following: audio dataset curation, model scaling, or audio generation model evaluation.

  3. Experienced in large-scale data processing.

  4. Experienced in solving complex problems that involve trade-offs, alternative solutions, and cross-functional collaboration, taking into account diverse points of view.

Public Compensation:

$177,000/year to $251,000/year + bonus + equity + benefits

Industry: Internet

Equal Opportunity:

Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.

Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
