Unbabel among first AI startups to win millions of GPU training hours on EU supercomputers
The European Union has announced the winners of a “Large AI Grand Challenge” it kicked off earlier this year in a bid to accelerate the pace of homegrown innovation by large-scale AI model makers.
Four startups will share €1 million in prize money and — perhaps more importantly — eight million GPU hours to train their models on a couple of the bloc’s high performance computing (HPC) supercomputers over the next 12 months. The Commission reckons this will enable them to shrink model training times “from years to weeks”, as its PR puts it.
The four winning startups are, in alphabetical order: French fintech Lingua Custodia, which does financial document processing using natural language processing (NLP); Belgian startup Textgain, which also uses NLP for text processing but focuses on analysis of unstructured data, such as monitoring social media chatter for hate speech; Latvian startup Tilde, another language specialist that focuses on Balto-Slavic languages — offering machine translation and AI-powered chatbots in the target tongues; and Portugal’s Unbabel, which has historically blended machine translation with the expertise of native human speakers — applying AI to customer service and productivity use cases for enterprise customers.
The Commission said the AI Challenge received a total of 94 proposals.
Unbabel likely has the highest profile of the four winners. The Y Combinator-backed translation business has been around for the best part of a decade and raised close to $100M over its run, per Crunchbase.
Whether Unbabel needs an extra quarter million euros — or even 2 million freebie GPU training hours — is up for debate. But even veteran AI startups may feel that every little helps, given the fast-paced developments in generative AI over the past 18 months or so.
At the end of the training period, the EU expects all the winners to release the models they develop under an open-source license for non-commercial use, or else to publish their research findings.
EU supercomputers to support AI startups
The EU unveiled a plan to expand startup access to the bloc’s supercomputing hardware in president Ursula von der Leyen’s state of the union address last fall — saying at the time that it wanted “ethical and responsible AI startups” to be first in line to tap computational support.
The European High Performance Computing Joint Undertaking (aka EuroHPC JU) — to give the bloc’s supercomputer initiative its full name — currently has eight operational supercomputers, out of nine procured. Two of these, Finland-based Lumi and Italy-based Leonardo (both pre-exascale HPC machines), will provide the allocation of eight million GPU hours to the four winners.
A fifth startup — Spain-based Multiverse Computing, which is focusing on trying to improve the energy efficiency and speed of large language models using “quantum-inspired tensor networks” — just missed out on any prize money but there’s a consolation: It will be allocated 800,000 computational hours on another of the supercomputers, Spain’s (pre-exascale) MareNostrum 5.
This handful of European startups building large scale AI models won’t be the first to get a taste of what HPC hardware can do. French general purpose AI model maker Mistral was a participant in an early pilot phase of the supercomputing provision last summer, using Leonardo to “run a few small experiments”, as co-founder and CEO Arthur Mensch told TechCrunch back in December — though he said it had not been used for model training at that point.
The EuroHPC JU has also historically provided some capacity to commercial players. However, demand for the supercomputers typically far outstrips supply, so the AI startups are essentially being bumped to the front of the queue.
EU policymakers have also recognized a need to reconfigure and retool the HPC infrastructure for the generative AI age. That’s why, back in January, the Commission announced a package of “AI innovation” measures that included proposals for upgrading the supercomputers and building out a support layer to improve accessibility, so that AI startups can more easily tap the infrastructure.