Voltron Data is harnessing AI-boom GPUs to crunch massive data sets
The AI boom has brought renewed attention to graphics processing units (GPUs), the high-powered chips behind modern generative AI and other machine learning applications.
And a startup called Voltron Data is harnessing some of those same GPUs to efficiently handle massive data analytics tasks, like combing through huge server logs for cybersecurity purposes, analyzing enormous sets of financial data, or processing telemetry from complex systems like autonomous cars. At a large enough scale, starting around 30 terabytes of data, database queries on traditional computing chips can start to bottleneck: processing time no longer drops linearly as you add more computing power, says Voltron Data cofounder and field CTO Rodrigo Aramburu.
“When things get really big, they get really weird,” he says. “The unit economics basically break down.”
But modern GPUs from Nvidia, and the servers that house them, are designed to move large volumes of data onto those processing chips faster than traditional CPU-based systems can, he says. GPUs can also quickly execute the mathematical operations needed to search and sort through data or combine data from multiple large tables, keeping database operations efficient even when data sets are mind-bogglingly large.
“It’s all of these little things that just add up,” says Aramburu. “From a database operation perspective, if you know how to take advantage of them, and you build something from the ground up to take advantage of all of this hardware acceleration, you’re going to get really, really good performance from the GPU.”
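Theseus itself is proprietary and the article doesn't show its API, but the open-source RAPIDS cuDF library offers a pandas-style, GPU-backed dataframe that illustrates the kind of hardware-accelerated join, group-by, and sort operations Aramburu is describing. Below is a minimal sketch, assuming an NVIDIA GPU and the cudf package; the tables and column names are invented for illustration and are not from Voltron Data.

```python
# Illustrative GPU-accelerated analytics with RAPIDS cuDF (pandas-like API).
# This is NOT Theseus, which is proprietary; it just shows the class of
# operations (join, group-by, sort) that run on the GPU. Requires an
# NVIDIA GPU with the cudf package installed.
import cudf

# Hypothetical tables: server logs joined against a host inventory.
logs = cudf.DataFrame({
    "host_id": [1, 2, 1, 3, 2],
    "bytes": [512, 2048, 128, 4096, 1024],
})
hosts = cudf.DataFrame({
    "host_id": [1, 2, 3],
    "region": ["us-east", "eu-west", "us-east"],
})

# Join, aggregate, and sort all execute on the GPU; at large scale,
# these are exactly the operations that bottleneck CPU-based engines.
joined = logs.merge(hosts, on="host_id")
traffic = (
    joined.groupby("region")["bytes"]
    .sum()
    .sort_values(ascending=False)
)
print(traffic)
```

At toy sizes like this, a GPU buys nothing; the claimed advantage appears only when tables grow to the tens-of-terabytes scale Aramburu describes.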
Companies using Voltron Data’s software, called Theseus, have been able to replace fleets of CPU-powered systems with vastly smaller numbers of GPU-centric servers, he says, sometimes substituting as few as one GPU-powered server for every 100 previously in use. That can substantially cut the amount of energy and real estate they require for handling big data—even as data processing tasks run more quickly.
One large retail client had a nightly process designed to predict sales and optimize the volumes of perishable goods sent to individual stores. On a CPU-based system, the process typically took almost eight hours to run. With Voltron Data's GPU-powered architecture, it could run in just 25 minutes, roughly a 19-fold speedup. That was a relief to the company, which previously had little margin if anything stalled the process during its eight-or-so-hour nightly window. It also gave developers the ability to test different versions of the code, making its predictions more accurate and getting more goods to the places where they'd be sold.
“They were able to iterate on it enough times that they reasonably impacted the model accuracy by a few percentage points,” he says.
Voltron Data’s own engineers need to know the details of how to optimize data processing tasks for GPUs, but developers and database engineers at the company’s customers can continue to write queries using SQL, the standard database language, or data frame processing libraries in various programming languages. They can also continue to store data in standard data lake environments and common formats, with Voltron Data software processing it essentially as is when the time comes to run queries.
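The article says customer engineers keep writing ordinary SQL against data sitting in the lake in common formats. Below is a hedged sketch of that query-in-place pattern, using the open-source DuckDB engine as a stand-in since Theseus's own interface isn't public; the file path and schema are hypothetical.

```python
# A sketch of the "query data in place" pattern the article describes:
# plain SQL over Parquet files in a data lake, with no separate load step.
# DuckDB is a stand-in engine here, not Theseus; the path and columns
# below are hypothetical.
import duckdb

result = duckdb.sql("""
    SELECT region, SUM(bytes) AS total_bytes
    FROM 'data/server_logs/*.parquet'  -- hypothetical lake path
    GROUP BY region
    ORDER BY total_bytes DESC
""").df()
print(result)
```

The design point is that the query text and the Parquet files stay the same; only the engine underneath changes, which is what lets customers adopt a GPU backend without rewriting their analytics code.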
Voltron Data doesn't host its own cloud systems. Instead, customers install the software on computers they already control, either in their own data centers or on an existing cloud network. In many cases, they're able to take advantage of largely idle GPU servers they've already acquired for other purposes, like development with previous generations of AI tools, Aramburu says.
The company's systems aren't necessary for every business, he freely admits, or for every task. For smaller data sets, traditional database tools still make sense, especially when they're used efficiently by programmers who know how to optimize performance. But for organizations dealing with data at the right scale, the advantages become apparent.
“All those algorithmic tricks that we can engage in start losing their utility, and being able to just brute force process all the data as quickly as possible becomes necessary,” he says. “That’s where an engine like ours, which is able to leverage all this advanced hardware, can just churn through it much faster.”