Penguin is targeting users in industries such as manufacturing, finance, and image processing who are inexperienced with GPU-based technology.
Hoping to make graphics processing unit (GPU)-based computing appealing to a wider audience, Penguin Computing has unveiled two pre-configured clusters built on Nvidia Tesla GPUs and AMD CPUs.
Penguin will aim the out-of-the-box offerings not just at technical researchers but at industries such as manufacturing, finance, and image processing.
The intent is to offer users inexperienced with GPU-based technology a way to quickly set up and configure systems that can be tailored to their respective applications and environments. All of the systems' components are integrated at the factory.
"With these systems it is now possible for IT managers to order one of these pre-configured clusters, and then work with us to customize them to their application set," said Sumit Gupta, senior product manager of GPU computing at Nvidia.
Priced at $44,985, the new Altus 1702 cluster has four twin compute nodes, four Nvidia Tesla S1070 GPU Computing Systems, and Gigabit Ethernet, and it comes with Penguin's Scyld management software.
The system can supply a little over 16 teraflops of computing power, and can be placed in a rack holding nine processors. Users can buy a cluster option that expands the basic configuration to eight Altus 1702 compute nodes and eight Tesla GPU Computing Systems, capable of 32 teraflops of performance, for $89,000.
Both systems contain AMD Opteron 2376 CPUs with 8 GB of memory per node, a 24-port Gigabit Ethernet switch, and a Scyld subscription for each node. Each Tesla S1070 GPU system contains four GPUs, with each one delivering roughly a teraflop of performance.
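The quoted cluster figures follow directly from the per-GPU numbers: four S1070 units, each housing four GPUs at roughly a teraflop apiece, yields the "little over 16 teraflops" of the base system. A minimal back-of-the-envelope sketch, assuming a peak of about 1.04 single-precision teraflops per GPU (the exact per-GPU figure is an assumption; the article states only "a teraflop"):

```python
# Rough sanity check of the quoted aggregate performance figures.
# TFLOPS_PER_GPU is an assumed approximate peak single-precision
# rating per GPU in the Tesla S1070; treat it as illustrative.

GPUS_PER_S1070 = 4
TFLOPS_PER_GPU = 1.04  # assumed approximate peak per GPU

def cluster_tflops(num_s1070_units: int) -> float:
    """Aggregate peak GPU performance for a cluster with the
    given number of Tesla S1070 units."""
    return num_s1070_units * GPUS_PER_S1070 * TFLOPS_PER_GPU

print(round(cluster_tflops(4), 2))  # base Altus 1702 config (4 units)
print(round(cluster_tflops(8), 2))  # expanded option (8 units)
```

With four units this lands just above 16 teraflops, and doubling to eight units doubles the figure, consistent with the 32-teraflop expanded configuration described above.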