GPU

DDN Simplifies the AI Data Center with NVIDIA

Grazed from DDN

DataDirect Networks (DDN) today announced it is teaming with NVIDIA to transform data centers with storage and compute solutions that are fully integrated and optimized for AI and deep learning (DL) workloads. Backed by deep AI expertise in both storage and computing, the DDN A³I platform with NVIDIA DGX-1 AI supercomputers delivers a validated, pre-configured solution that enables high performance at scale, thus making it faster and easier for every enterprise to gain data-fueled insights through the power of AI and deep learning.

"Artificial intelligence and deep learning applications are creating some of the most challenging workloads in modern computing history, and are straining traditional compute, storage and network resources," said Paul Bloch, president and co-founder, DDN. "DDN A³I with NVIDIA DGX-1 is an integrated solution that provides unlimited three-dimensional scaling and improved performance as clusters grow, accelerating the end-to-end AI and DL workflow. DDN A³I with DGX-1 is driving faster iteration and, most importantly, speeding business innovation."

DDN A³I with NVIDIA DGX-1 provides a turnkey solution that delivers a true end-to-end parallel architecture providing the highest throughput, lowest latency and maximum concurrency in data delivery to applications. Easy to deploy, manage, scale and support, the joint solution, available through AI-specialized resellers, delivers immediate workflow enablement and full saturation of GPU resources, all backed by one of the world's strongest pools of deep AI expertise.

ZeroStack Delivers GPU-as-a-Service via NVIDIA Hardware

Grazed from ZeroStack

ZeroStack announced that its Self-Driving Cloud Platform can now detect NVIDIA GPUs and provide selective end-user access to them, delivering GPU-as-a-Service. This makes the ZeroStack platform the first solution to offer cloud-based, fine-grained access to GPU services.

ZeroStack's GPU-as-a-Service capability automatically detects NVIDIA GPUs hosted on multiple physical servers and makes them available within the ZeroStack environment. ZeroStack cloud administrators can then configure and scale GPU resources and grant end users fine-grained access to them. Users can enable GPU acceleration, deploy new machine learning and deep learning workloads with tools such as TensorFlow and Caffe, and give those applications dedicated access to multiple GPUs for order-of-magnitude improvements in inference latency and user responsiveness.
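
As a rough illustration of the kind of workload this enables, the sketch below checks GPU visibility from inside a provisioned instance and pins a small TensorFlow computation to the first GPU. It is a minimal, hypothetical example using TensorFlow 2-style APIs; the ZeroStack-specific provisioning steps happen in the platform's console and are not shown.

    # Minimal sketch: confirm that a provisioned instance sees its GPUs and
    # run a small TensorFlow computation on one of them. Assumes a guest
    # image with GPU-enabled TensorFlow installed (a hypothetical setup).
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    print("Visible GPUs:", gpus)

    if gpus:
        # Pin a small matrix multiplication to the first GPU.
        with tf.device("/GPU:0"):
            a = tf.random.normal([1024, 1024])
            b = tf.random.normal([1024, 1024])
            c = tf.matmul(a, b)
        print("Computed on:", c.device)
    else:
        print("No GPU visible to this instance.")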

New Amulet Hotkey CoreStation Solutions Simplify Windows Migration and Enhance Compute Performance

Grazed from Amulet Hotkey

Amulet Hotkey Inc., a leader in design, manufacturing and system integration for remote physical and virtual workstation solutions, today announced a significant addition to the CoreStation blade family based on the Dell EMC PowerEdge FX architecture. 

For the first time, powerful NVIDIA Tesla data center GPUs can be used in the industry-leading Dell EMC PowerEdge FX architecture. The Amulet Hotkey CoreStation VFC640 GPU-accelerated blade server uses a unique PCIe expansion module and GPU card developed in collaboration with Dell EMC and NVIDIA product engineering teams. Two Amulet Hotkey DXF-EXP-V modules support up to eight NVIDIA Tesla P6 GPUs in the FX2 server chassis, while maintaining the benefits of the FC640 blades and FX2 architecture. The result is a powerful and agile platform that can handle a broad range of workloads.

"The Amulet Hotkey CoreStation VFC640 expands upon the market leading blade workstation portfolio designed to meet the graphics and compute performance needs of professionals while driving customers' IT transformations," said Andrew Jackson, president, Amulet Hotkey Inc. "Our unique solution enables up to eight powerful GPUs, with up to four dual-socket servers in a PowerEdge FX2s chassis. Delivering this capability in an industry standard 2U rackmount form factor demonstrates our commitment to use innovative design and manufacturing to meet enterprise IT needs for a truly flexible and scalable computing architecture." 

Rescale's Turnkey Cloud HPC Platform Now Offers NVIDIA Tesla V100 GPU With NVLink

Grazed from Rescale

Rescale, the global provider of HPC in the cloud, today announced that NVIDIA Tesla V100 GPU accelerators and NVIDIA NVLink high-speed interconnect technology are now available on Rescale's ScaleX turnkey cloud platform for AI and high performance computing (HPC). The new GPUs are hosted by Amazon Web Services (AWS) and as dedicated bare metal nodes by SkyScale, a Rescale partner. All resources are accessible on an hourly basis with Rescale's more than 200 pre-installed and pre-tuned HPC applications.

With the addition of the Tesla V100 to the ScaleX platform, Rescale users gain instant, hourly access to the fastest, most powerful GPU on the market. Based on the highly efficient NVIDIA Volta GPU architecture, the Tesla V100 is a big compute powerhouse, delivering 3x the training performance compared to its predecessor.

AWS Announces Availability of P3 Instances for Amazon EC2

Grazed from Amazon Web Services

Today, Amazon Web Services, Inc. (AWS), an Amazon.com company, announced P3 instances, the next generation of Amazon Elastic Compute Cloud (Amazon EC2) GPU instances designed for compute-intensive applications that require massive parallel floating point performance, including machine learning, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and autonomous vehicle systems. The first instances to include NVIDIA Tesla V100 GPUs, P3 instances are the most powerful GPU instances available in the cloud. To get started with P3 instances, visit https://aws.amazon.com/ec2/instance-types/p3/.

P3 instances allow customers to build and deploy advanced applications with up to 14 times better performance than previous-generation Amazon EC2 GPU compute instances, and reduce training of machine learning applications from days to hours. With up to eight NVIDIA Tesla V100 GPUs, P3 instances provide up to one petaflop of mixed-precision, 125 teraflops of single-precision, and 62 teraflops of double-precision floating point performance, as well as a 300 GB/s second-generation NVIDIA NVLink interconnect that enables high-speed, low-latency GPU-to-GPU communication. P3 instances also feature up to 64 vCPUs based on custom Intel Xeon E5 (Broadwell) processors, 488 GB of DRAM, and 25 Gbps of dedicated aggregate network bandwidth using the Elastic Network Adapter (ENA).
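
For readers who want a programmatic starting point beyond the link above, the snippet below is a minimal sketch of requesting a P3 instance with the AWS SDK for Python (boto3). The AMI ID, key pair name, and region are placeholders rather than values from the announcement.

    # Minimal sketch: request a single p3.16xlarge instance (8x Tesla V100,
    # 64 vCPUs, 488 GB RAM) with boto3. ImageId and KeyName are placeholders;
    # substitute an AMI (e.g., a Deep Learning AMI) and key pair from your account.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",      # placeholder AMI ID
        InstanceType="p3.16xlarge",
        KeyName="my-key-pair",       # placeholder key pair name
        MinCount=1,
        MaxCount=1,
    )
    print("Launched instance:", response["Instances"][0]["InstanceId"])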

AMD works with Amazon to create virtualized graphics in the cloud

Grazed from VentureBeat. Author: Dean Takahashi

Amazon Web Services has chosen to use Advanced Micro Devices’ graphics technology to run graphics software in the cloud. The net result is that it could become a lot cheaper to process graphics-intensive apps in the cloud, rather than on a local machine.

AMD said in a blog post that it’s no secret most enterprise applications — from standard Windows productivity apps to engineering software — run better when they are accelerated by graphics processing units (GPUs). Traditionally, those apps have run on heavy-duty workstations on local machines.
 

With Volta, NVIDIA Pushes Harder into the Cloud

Grazed from Top500. Author: Michael Feldman.

Amid all the fireworks around the Volta V100 processor at the GPU Technology Conference (GTC) last week, NVIDIA also devoted a good deal of time to its new cloud offering, the NVIDIA GPU Cloud (NGC). With NGC and its new Volta offerings, the company is now poised to play both ends of the cloud market: as a hardware provider and as a platform-as-a-service provider.

At the heart of NGC is a set of deep learning software stacks that can sit atop NVIDIA GPUs – not just the new Tesla V100, but also the P100, or even the consumer-grade Titan Xp. The stack itself comprises popular deep learning frameworks (Caffe, Microsoft Cognitive Toolkit, TensorFlow, Theano and Torch), NVIDIA’s deep learning libraries (cuDNN, NCCL, cuBLAS, and TensorRT), the CUDA drivers, and the OS...

New Version of Univa Grid Engine 2x Faster Than Sun Grid Engine 6.2u5

Grazed from Univa

Univa, a leading innovator of workload management products, today announced the general availability of its Grid Engine 8.5.0 product. Univa’s enterprise customers continue to push the boundaries of scalability and performance, and Univa has responded: with Univa Grid Engine 8.5.0, benchmarks show a 2x performance improvement in scheduling and processing across all workload types versus previous versions of Grid Engine. Enterprises will now see immediate results, with less wait time and more work completed.

In addition to scalability and performance improvements, this release includes resource map enhancements for selecting host-based resources such as GPUs and improvements to Docker device selection, allowing GPUs to be mapped into Docker containers. Users can now place GPU applications into Docker containers and run them on any Docker-enabled host in the cluster.
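
The Grid Engine release handles this GPU-to-container mapping through the scheduler itself, but the underlying idea can be illustrated, outside of Grid Engine, with the Docker SDK for Python: the sketch below requests a GPU device when starting a container. The image name is a placeholder, and this is not Univa's interface.

    # Illustrative sketch (not Univa's interface): request one GPU when starting
    # a container with the Docker SDK for Python, then run nvidia-smi inside it
    # to list the devices the container can see. Image name is a placeholder.
    import docker

    client = docker.from_env()

    output = client.containers.run(
        "nvidia/cuda:11.0-base",       # placeholder CUDA-enabled image
        "nvidia-smi",
        device_requests=[
            docker.types.DeviceRequest(count=1, capabilities=[["gpu"]])
        ],
        remove=True,
    )
    print(output.decode())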
 

Kinetica and Nimbix Team Up to Offer GPU Computing in the Cloud for Enterprise Customers

Grazed from Kinetica and Nimbix

Kinetica, provider of the fastest in-memory database accelerated by GPUs, today announced its real-time analytics and visualization solution is immediately available on the Nimbix Cloud. Providing instant results and visualized insights across massive streaming datasets, Kinetica on the Nimbix Cloud can be launched in seconds and is the ideal solution for GPU-accelerated analytics.

"Kinetica on the Nimbix Cloud harnesses the power of parallel GPUs to deliver real-time analytics and data written to Kinetica is automatically routed to parallel connections across the cluster," said Amit Vij, cofounder and CEO, Kinetica.  "The full Kinetica stack can be provisioned with a couple of mouse clicks from the Nimbix console or launched and automated with JARVICE's powerful task API."

Liquidware Labs Announces the Release of Stratusphere UX 5.8.6 - Now with vGPU Monitoring Powered by NVIDIA GRID

Grazed from Liquidware Labs

Liquidware Labs, a leader in desktop transformation solutions, today announced the general availability of Stratusphere UX 5.8.6 with a number of new features, including new virtual GPU (vGPU) metrics as well as dashboard enhancements, greater security and more. The company is demonstrating the software at Citrix Summit 2016 Anaheim this week (Booth #408).

"Stratusphere UX continues to lower the barrier of entry for the diagnostics and monitoring of end-user workloads" said David Bieneman, CEO, Liquidware Labs. "The solution's ability to focus on all users, machines and applications, while defining a quantitative metric for user experience, is the defining aspect of Stratusphere. And now, with more granular vGPU metrics and enhanced dashboard views, Stratusphere UX offers greater capabilities in support of trending, optimizing and diagnosing the challenges faced in next generation end-user workspaces."