Hi! I'm Hemlata Tak, a Senior Research Scientist at Pindrop, USA, with a Ph.D. in Computer Science from Sorbonne University, France, where I worked under the guidance of Prof. Nicholas Evans. My expertise lies in voice authentication, deepfake detection, AI singing detection, and model attribution.

Professional Impact

In my role at Pindrop, I apply my expertise in machine learning and speech foundation models to build impactful deepfake detection solutions that improve detection accuracy in real-world conditions. Key contributions include:

This experience reflects my commitment to leveraging advanced AI and machine learning to solve complex, large-scale challenges and to drive continuous improvement in deepfake detection performance.

Updates

  • October 2025: We're proud to announce that our product Pindrop Pulse for Meetings has been named one of TIME's Best Inventions of 2025. I'm proud to be part of the excellent team that built such a transformative solution for detecting fraud. Check it out here.

  • August 2025: One paper accepted at the ACM Multimedia 2025 conference on audio-visual deepfake detection and localization.

  • July 2025: We’re excited to announce that we secured 1st place in Task 2 (deepfake localization) and 2nd place in Task 1 (deepfake detection) of the ACM Multimedia 2025 1M-Deepfake Detection Challenge, on both the TestA set and the most challenging hidden TestB set.

  • June 2025: We’re excited to announce two special sessions at the IEEE ASRU 2025 workshop: Responsible Speech and Audio Generative AI and Frontiers in Deepfake Voice Detection and Beyond. These sessions will explore cutting-edge research in trustworthy AI, including themes of fairness, interpretability, and robust deepfake detection. We look forward to insightful discussions and meaningful collaborations at ASRU 2025! Learn more here.

  • May 2025: I'm thrilled to announce that we are organizing a special session on Source Tracing for Audio Deepfake Detection at the Interspeech 2025 conference, marking the first time this important and emerging topic will be featured at the Interspeech conference. The session received 18 submissions, of which 11 high-quality papers were accepted. These contributions explore cutting-edge techniques and novel approaches to identifying the origin of deepfake audio and strengthening defenses against deepfake attacks. Join us as we take a significant step forward in advancing the field of trustworthy AI. Find details here.

  • May 2025: One paper accepted at Interspeech 2025.

  • Jan 2025: Honored to receive the Rookie of the Year 2024 award at Pindrop.

  • Jan 2025: The ASVspoof5 audio deepfake dataset and ground truths are now freely available to the community. Check out the Dataset and paper.

  • Jan 2025: Organising a special session on "Source tracing: The origins of synthetic or manipulated speech" at Interspeech 2025. Papers on relevant topics are welcome in the special session.

  • Jan 2025: Paper accepted at ICASSP 2025.

  • July 2024: Organising the ASVspoof 2024 workshop as an Interspeech 2024 satellite event. Registration is open.

  • June 2024: Three papers accepted at Interspeech 2024.

  • May 2024: The ASVspoof5 detection challenge (phase 2) has now started. Check it out here.

  • January 2024: I joined Pindrop as a Research Scientist! I will be working on audio deepfake detection.

  • Nov 2023: I will be delivering a talk with Prof. Massimiliano Todisco (EURECOM) on "From Artefacts to Insights: A Topical Analysis of Voice Biometric Security" at the Joint Workshop of VoicePersonae and ASVspoof 2023, Tokyo, Japan. For more information, check it out here.

  • August 2023: Our paper entitled "t-EER: Parameter-Free Tandem Evaluation of Countermeasures and Biometric Comparators" has been accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023.

  • August 2023: I will be offering a tutorial on "Advances in audio anti-spoofing and deepfake detection using graph neural networks and self-supervised learning" at the INTERSPEECH 2023 conference, Dublin, Ireland.

  • July 2023: ASVspoof5 is now calling for spoofed speech data contributors. Check it out here.

  • May 2023: Two papers accepted at Interspeech 2023.