We are happy to release the OpenPII English Anonymiser: the most powerful open-source tool for redacting sensitive info from English text.
Fine-tuned from ModernBERT on 5.7 million+ PII examples, it clocks 99%+ accuracy across emails, dates, social security numbers, and more!
Why it's a big deal:
- Top-tier precision: 100% for passport numbers, 99.96% for emails*.
- Totally free: MIT license for personal or commercial use.
- No secrets: full metrics shared on Hugging Face.
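To give a feel for what the redaction interface looks like in practice, here is a minimal, self-contained sketch using hand-written regexes for two of the entity types mentioned above. To be clear, the released anonymiser is a fine-tuned token-classification model, not regexes; the patterns and labels below are purely illustrative.

```python
import re

# Toy stand-in for the model: regexes for two entity types (illustrative only).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched entity with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
```

A model-based anonymiser exposes essentially the same contract (text in, placeholder-tagged text out), but handles entity types that regexes cannot reliably capture, such as names and free-form dates.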
Excited to share insights about LinkedIn's innovative approach to content search, recently detailed in a groundbreaking paper by their Mountain View team. This advancement represents a significant shift from traditional keyword-based search to semantic understanding.
>> Technical Architecture
The new search engine employs a sophisticated two-layer architecture:
Retrieval Layer
- Token-Based Retriever (TBR) for exact keyword matching
- Embedding-Based Retriever (EBR) using a two-tower model with multilingual-e5 embeddings
- Pre-computed post embeddings stored in a dedicated embedding store for efficient retrieval
Multi-Stage Ranking
- L1 stage: initial filtering using a lightweight model
- L2 stage: advanced ranking with complex features, including:
  - Query-post semantic matching
  - Author reputation analysis
  - User engagement metrics
  - Content freshness evaluation
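The two-layer flow above can be sketched in a few lines. Everything below is invented for illustration: the embeddings are tiny hand-made vectors (in the paper they come from the two-tower multilingual-e5 model), and the L2 feature names and weights are made up, covering only a subset of the features listed.

```python
from math import sqrt

# Illustrative corpus: each post has a precomputed embedding plus ranking features.
posts = [
    {"id": "a", "emb": [0.9, 0.1], "reputation": 0.8, "freshness": 0.2},
    {"id": "b", "emb": [0.7, 0.7], "reputation": 0.5, "freshness": 0.9},
    {"id": "c", "emb": [0.1, 0.9], "reputation": 0.9, "freshness": 0.5},
]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (sqrt(sum(x * x for x in u)) * sqrt(sum(x * x for x in v)))

def search(query_emb, k1=2):
    # L1: cheap filter - keep the top-k1 posts by semantic similarity alone.
    l1 = sorted(posts, key=lambda p: cosine(query_emb, p["emb"]), reverse=True)[:k1]
    # L2: richer ranking - blend semantic match with reputation and freshness.
    def l2_score(p):
        return (0.6 * cosine(query_emb, p["emb"])
                + 0.25 * p["reputation"]
                + 0.15 * p["freshness"])
    return [p["id"] for p in sorted(l1, key=l2_score, reverse=True)]

print(search([1.0, 0.0]))
```

The design point this illustrates: the expensive feature blend only ever runs on the small candidate set the cheap L1 pass lets through.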
>> Performance Improvements
The system has achieved remarkable results:
- 10%+ improvement in both on-topic rate and long-dwell metrics
- Enhanced ability to handle complex natural language queries
- Significant boost in sitewide engagement
This advancement enables LinkedIn to better serve complex queries like "how to ask for a raise?" while maintaining high performance at scale. The system intelligently balances between exact keyword matching and semantic understanding, ensuring optimal results for both navigational and conceptual searches.
What impresses me most is how the team solved the scale challenge - processing billions of posts efficiently using pre-computed embeddings and approximate nearest neighbor search. This is enterprise-scale AI at its finest.
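To make the scale trick concrete, here is a toy sketch of approximate nearest neighbor search via random-hyperplane hashing. The paper does not describe LinkedIn's ANN implementation at this level of detail (production systems typically use tuned indexes such as HNSW or IVF), so everything below is illustrative; it only shows the core idea of replacing an exact scan over every post with a fast bucket lookup over pre-computed structures.

```python
import random

random.seed(0)
DIM = 8
# Random hyperplanes shared by all vectors; one hash bit per plane.
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(4)]

def signature(vec):
    # Which side of each hyperplane the vector falls on.
    return tuple(sum(p * x for p, x in zip(plane, vec)) >= 0 for plane in planes)

# Pre-compute signatures once, analogous to pre-computed post embeddings.
corpus = {i: [random.gauss(0, 1) for _ in range(DIM)] for i in range(1000)}
buckets = {}
for i, vec in corpus.items():
    buckets.setdefault(signature(vec), []).append(i)

def ann_search(query):
    # Score only the posts sharing the query's hash bucket, not all 1000.
    candidates = buckets.get(signature(query), [])
    return max(candidates,
               key=lambda i: sum(q * x for q, x in zip(query, corpus[i])),
               default=None)
```

With 4 hash bits the scan shrinks to roughly 1/16 of the corpus per query, at the cost of occasionally missing the true nearest neighbor in another bucket.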
Announcing Global-MMLU: an improved, open MMLU dataset with evaluation coverage across 42 languages, built with Argilla and the Hugging Face community.
200+ contributors used Argilla to label MMLU questions where regional, dialect, or cultural knowledge was required to answer correctly. It turns out 85% of the questions required Western-centric knowledge!
Thanks to this annotation process, the open dataset contains two subsets:
1. Culturally Agnostic: no specific regional or cultural knowledge is required.
2. Culturally Sensitive: requires dialect, cultural, or geographic knowledge to answer correctly.
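Downstream, the two subsets can be pulled apart with a simple predicate. The rows and the `cultural_sensitivity` field below are invented for illustration; check the dataset card on the Hub for the real split structure and column names.

```python
# Illustrative records only - not actual Global-MMLU rows or column names.
rows = [
    {"question": "What is 2 + 2?", "cultural_sensitivity": "agnostic"},
    {"question": "Which festival marks the Lunar New Year?", "cultural_sensitivity": "sensitive"},
    {"question": "What is the boiling point of water at sea level?", "cultural_sensitivity": "agnostic"},
]

agnostic = [r for r in rows if r["cultural_sensitivity"] == "agnostic"]
sensitive = [r for r in rows if r["cultural_sensitivity"] == "sensitive"]
print(len(agnostic), len(sensitive))
```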
Moreover, we provide high-quality translations for 25 of the 42 languages, thanks again to the community and to professional annotators leveraging Argilla on the Hub.
I hope this will ensure a better understanding of the limitations and challenges of making open AI useful for many languages.
I'm currently on a push to expand the scope of image-based datasets on the Hub. There's certainly a lot already, but for anyone who's looked closely, there's not a whole lot of standardization. I aim to fix that: datasets under the pixparse org will serve as canonical examples for various task/modality combinations and be usable without fuss in libraries like timm, OpenCLIP, and hopefully more.
I just uploaded the first multi-label dataset that I'll support with timm scripts soon: timm/plant-pathology-2021
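Multi-label means each image can carry several labels at once, so targets are multi-hot vectors rather than a single class index. A minimal sketch, assuming the six classes from the original Plant Pathology 2021 competition; the helper itself is illustrative, not part of timm:

```python
# Apple foliar disease classes from the Plant Pathology 2021 competition.
CLASSES = ["scab", "rust", "complex", "frog_eye_leaf_spot", "powdery_mildew", "healthy"]

def multi_hot(labels):
    """Encode a set of label names as a 0/1 target vector over CLASSES."""
    return [1.0 if c in labels else 0.0 for c in CLASSES]

# A leaf can show scab and frog eye leaf spot at the same time.
print(multi_hot({"scab", "frog_eye_leaf_spot"}))
```

Training against such targets typically swaps softmax cross-entropy for a per-class sigmoid loss, since the classes are no longer mutually exclusive.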
Next up object detection & segmentation! I've got an annotation spec sorted out, a lot of datasets ready to rip, and yeah that means timm support for object detection, eventually segmentation, is finally under development :O
OmniVision-968M: a new local VLM for edge devices, fast and small but performant.
- A new vision-language model with 9x fewer image tokens, super efficient
- Aligned with DPO to reduce hallucinations
- Apache 2.0 license
In August, the XetHub team joined Hugging Face - https://huggingface.co/blog/xethub-joins-hf - and we've been rolling up our sleeves to bring the best of both worlds together. We started with a deep dive into the current state of files stored with Git LFS on the Hub.
Getting this information was no small feat. We had to: * Analyze a complete database dump of all repositories and files stored in Git LFS across Hugging Face. * Parse through metadata on file sizes and types to accurately map the storage breakdown across Spaces, Models, and Datasets.
This isn't a goal of ours because we have plenty of money in the bank, but I'm quite excited to see that @huggingface is profitable these days, with 220 team members and most of our platform being free (like model hosting) and open-source for the community!
Especially noteworthy at a time when most AI startups wouldnβt survive a year or two without VC money. Yay!