
Transitioning to Multicore: Part 1 [ Source: Rogue Wave Software ]

February 2014- As you think about your application in the context of new multicore systems, you may wonder: What does it mean to transition my application to multicore? What is the essence of a multicore application? What are the tradeoffs involved? What changes do I need to make?

This paper, part one of two, answers these questions, showing you how to transition an application to run on a multicore system smoothly and correctly.
...

Focused Cooling Using Cold Aisle Containment [ Source: Emerson Network Power ]

June 2009- As heat densities and cooling costs rise, data center professionals are looking for more efficient cooling solutions. This paper discusses the benefits of both the hot aisle and cold aisle containment approaches, focusing on the latter as an effective method for eliminating data center hot spots and reducing overall cooling costs by as much as 30%.

Discover how cold aisle containment, as an addition to a conventional precision cooling system, consistently separates cold ...

Reducing Data Center Energy Consumption by Creating Savings that Cascade Across Systems [ Source: Emerson Network Power ]

August 2009- This paper introduces the concept of the “cascade effect” and describes how it can be used to cut data center energy costs by 50% or more. Emerson Network Power developed this vendor-neutral roadmap for optimizing data center efficiency, called Energy Logic, by applying the top ten energy-saving opportunities to a 5,000-square-foot data center model based on real-world technologies and operating parameters.

Outlining this sequential approach to reducing energy costs, this paper includes ...
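The cascade idea is easy to make concrete with arithmetic: a watt saved at the IT load also avoids the upstream conversion losses and the cooling energy needed to deliver and remove that watt. The sketch below uses hypothetical efficiency figures chosen only for illustration; they are not the values used in the Energy Logic analysis.

    # Illustrative sketch of the "cascade effect": a watt saved at the IT load
    # also avoids the upstream conversion losses and the cooling energy needed
    # to deliver and remove that watt. All efficiency figures are hypothetical
    # assumptions for illustration, not values from the Energy Logic analysis.
    def facility_watts_per_it_watt(psu_eff=0.90, pdu_eff=0.97, ups_eff=0.92,
                                   cooling_overhead=0.7, building_overhead=0.05):
        """Watts drawn at the utility meter for each watt consumed by compute."""
        delivered = 1.0 / (psu_eff * pdu_eff * ups_eff)   # power-chain losses
        cooled = delivered * (1.0 + cooling_overhead)     # heat must be removed
        return cooled * (1.0 + building_overhead)         # lighting, switchgear, etc.

    multiplier = facility_watts_per_it_watt()
    print(f"1 W saved at the IT load saves roughly {multiplier:.2f} W at the facility")

Under these assumed figures, one watt saved at the server cascades to a little over two watts saved at the facility level, which is the mechanism the paper builds on.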

Panasas Scalable Storage: Data Management Challenges [ Source: Panasas ]

March 2010- As the calendar turned to 2010, many reflected on the changes that occurred over the past 10 years. Looking back on the life sciences, the biggest event was the sequencing of the human genome, which caused a data explosion that continues today. Specifically, the growing use of sequencing and medical imaging is producing a constant increase in the volume of life sciences data that must be used to develop new drugs. This growth is forcing most life ...

Strategies for ensuring clean, continuous power to essential IT systems [ Source: Eaton ]

September 2009- Data centers rely on a continuous supply of clean electricity. However, anything from a subtle power system design flaw to a failure in the electrical grid can easily bring down even the most modern and sophisticated data center.

Fortunately, organizations can significantly mitigate their exposure to power-related downtime by adopting proven changes to their business processes and electrical power system management practices.

This white paper discusses 10 such underutilized best practices for ...

Power Monitoring 101: Supervisory, Connectivity and Protection Options That Add An Umbrella of Protection Over Your Entire IT Infrastructure [ Source: Eaton ]

August 2009- Monitoring options are now available for organizations of any size. You can remotely monitor and manage a single uninterruptible power system (UPS), an enterprise-wide network of many UPSs and power distribution devices, or a complete IT support infrastructure, including generators, environmental systems and detection devices, and other components from multiple vendors.

This white paper discusses the imperatives of intelligent power monitoring.

Calculating and Prioritizing Your Data Center IT Efficiency Actions [ Source: Emerson Network Power ]

July 2009- The lack of a true data center efficiency metric makes it difficult for IT and data center managers to justify much-needed IT investments to management. To help quantify data center efficiency, Emerson Network Power introduces in this paper CUPS, or Compute Units per Second. Like the miles-per-gallon metric in the automobile industry, CUPS is meant to represent the “performance” of the data center.

Using the Energy Logic analysis and the ...
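A rough sense of the miles-per-gallon analogy can be given with a toy calculation: normalized compute delivered per kilowatt drawn. The per-tier numbers and the weighting below are hypothetical placeholders, not the CUPS definition from the paper.

    # Hypothetical illustration of a CUPS-style "miles per gallon" ratio:
    # normalized compute delivered per kilowatt drawn. The workload figures and
    # the weighting are placeholders, not the CUPS definition from the paper.
    from dataclasses import dataclass

    @dataclass
    class ServerGroup:
        name: str
        compute_units_per_sec: float  # assumed normalized throughput
        power_kw: float               # measured power draw

    def cups_per_kw(groups):
        return (sum(g.compute_units_per_sec for g in groups)
                / sum(g.power_kw for g in groups))

    fleet = [ServerGroup("web tier", 1200.0, 40.0),
             ServerGroup("database tier", 800.0, 55.0)]
    print(f"{cups_per_kw(fleet):.1f} compute units per second per kW")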

HPC Management Software: Reducing the Complexity of HPC Cluster and Grid Resources [ Source: Platform Computing ]

May 2008- This white paper primarily focuses on the expanding role of HPC management software in the HPC marketplace. The paper emphasizes its historical development and considers its business case and future prospects while dispensing with more elaborate technical analysis and product comparisons.

This white paper reviews the rise of clusters to dominance in the HPC market and points out the elevating effect this rise has had on the role of the software in use between the ...

GPFS Multicluster With the IBM System Blue Gene Solution and eHPS Clusters [ Source: IBM ]

January 2008- This paper describes a case study in which an IBM System Blue Gene Solution supercomputer is configured to natively access General Parallel File System (GPFS) 2.3 file systems that are owned by an IBM eServer pSeries cluster with AIX 5L and the IBM eServer High Performance Switch (eHPS). The IBM System Blue Gene Solution Service Node (SN), Front-End Nodes (FENs), and I/O Nodes (IONs) are configured as one GPFS cluster that does not contain any ...

Optimizing Mirroring Performance Using HACMP/XD for Geographic LVM [ Source: IBM ]

January 2008- The mirroring of data between two remote sites has become significantly easier with the arrival of the new HACMP/XD for Geographic Logical Volume Manager (GLVM) solution. Because GLVM is IP-based, sites can span an almost unlimited distance. In GLVM, the AIX 5L LVM (Logical Volume Manager) itself is responsible for mirroring the data; thus several complex and time-consuming manual tasks have been eliminated. The configuration and management of GLVM is therefore simpler ...

High-Performance Server Systems and the Next Generation of Online Games [ Source: IBM ]

January 2008- It is clear that from the point of view of theoretical maximum performance, the Cell BE processor’s low-latency local store with bandwidth-efficient DMA and manually managed memory latency offers significant advantages. However, from a software-engineering perspective, the impact of porting some types of legacy game software is daunting. It is not enough to identify computationally expensive sub-processes and fit them onto the SPEs; the data which supports those sub-processes must be stored in a ...

Provisioning and Patching Your Oracle Environment With Oracle Enterprise Manager 10g [ Source: Oracle ]

January 2008- The Oracle Grid offers a proven solution that allows businesses to heighten application performance and deliver unparalleled IT infrastructure reliability. Grid Computing offers Oracle customers the ability to scale out their infrastructure as computing demand increases over time, to relocate and activate resources easily, and to ensure that deployed software is locked down based on compliance rules. While these benefits of Grid Computing are well acknowledged, they also demand that enterprises have the right ...

High Performance Object Tracking System Using Active Cameras [ Source: Microsoft ]

January 2008- This white paper describes a high performance object tracking system for obtaining high quality images of a moving object at video rate by controlling a pair of active cameras mounted on two motor-driven pan-tilt units. For tracking objects in image space, a pixel-clustering-based algorithm called “K-means tracker” is proposed that makes use of both target and non-target information for fast object tracking. A PID control scheme is employed for controlling the angular ...
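As a rough illustration of the control side, the sketch below is a minimal discrete PID loop for a single pan axis that drives the tracked object's horizontal pixel error toward zero; the gains, frame rate, and image width are assumed values, not the tuning used in the paper.

    # Minimal discrete PID loop for one pan axis: drive the tracked object's
    # horizontal pixel error toward zero. Gains, frame rate, and image width
    # are assumed values for illustration, not the tuning used in the paper.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, error):
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pan = PID(kp=0.02, ki=0.001, kd=0.005, dt=1 / 30.0)  # 30 fps video rate

    def pan_command(target_x, image_center_x=320):
        """Angular-velocity command (rad/s) from horizontal pixel error."""
        return pan.step(target_x - image_center_x)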

Migration From Platform Computing’s Load Sharing Facility to Sun N1 Grid Engine Software [ Source: Sun Microsystems ]

January 2008- This white paper presents a case study covering Transmeta’s migration process from Platform Computing’s Load Sharing Facility (LSF 4.0.1) to Sun N1 Grid Engine 6 software (N1GE6). Transmeta initially started experimenting with the open source version of Grid Engine (SGE5.3.x) for a period of time and subsequently decided to upgrade to the latest Sun N1GE6 product. Both LSF and N1GE6 are excellent Distributed Resource Management systems that greatly increase job throughput. ...

Blue Gene/L Torus Interconnection Network [ Source: IBM ]

January 2008- One of the most important features of a massively parallel supercomputer is the network that connects the processors together and allows the machine to operate as a large coherent entity. In Blue Gene/L (BG/L), the primary network for point-to-point messaging is a three-dimensional (3D) torus network with dynamic virtual cut-through routing. This paper describes both the architecture and ...
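The torus topology itself is simple to state concretely: every node at coordinates (x, y, z) has a link to its nearest neighbor in each direction along each axis, with wraparound at the edges. The sketch below computes neighbors and minimum hop counts for an assumed 8x8x8 example; it is not tied to any particular BG/L partition size.

    # Each node of a 3D torus at (x, y, z) has six neighbors, one in each
    # direction along each axis, with wraparound at the edges. The 8x8x8 size
    # is an assumed example, not a specific Blue Gene/L partition.
    def torus_neighbors(x, y, z, dims=(8, 8, 8)):
        X, Y, Z = dims
        return [((x + 1) % X, y, z), ((x - 1) % X, y, z),
                (x, (y + 1) % Y, z), (x, (y - 1) % Y, z),
                (x, y, (z + 1) % Z), (x, y, (z - 1) % Z)]

    def torus_hops(a, b, dims=(8, 8, 8)):
        """Minimum hop count between two nodes, taking the shorter way around."""
        return sum(min((ai - bi) % d, (bi - ai) % d)
                   for ai, bi, d in zip(a, b, dims))

    print(torus_neighbors(0, 0, 0))          # a corner node still has six links
    print(torus_hops((0, 0, 0), (7, 7, 7)))  # 3 hops, not 21, thanks to wraparound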

Overview of the Blue Gene/L System Architecture [ Source: IBM ]

January 2008- The Blue Gene/L computer is a massively parallel supercomputer based on IBM system-on-a-chip technology. It is designed to scale to 65,536 dual-processor nodes, with a peak performance of 360 teraflops. This paper describes the project objectives and provides a synopsis of the system architecture that resulted. The paper discusses the application-based approach and rationale for a low-power, highly integrated design, and introduces the key architectural features of Blue Gene/L.
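The quoted peak figure follows directly from the per-node numbers. The clock rate and flops-per-cycle values below are the commonly cited Blue Gene/L figures (700 MHz PowerPC 440 cores with a dual-pipeline FPU), stated here as assumptions rather than taken from this paper.

    # Back-of-the-envelope check of the quoted 360-teraflop peak, assuming the
    # commonly cited per-core figures (700 MHz clock, 4 flops per cycle from
    # the dual-pipeline FPU) rather than values taken from this paper.
    nodes = 65_536
    cores_per_node = 2
    clock_hz = 700e6
    flops_per_cycle = 4
    peak_tflops = nodes * cores_per_node * clock_hz * flops_per_cycle / 1e12
    print(f"{peak_tflops:.0f} teraflops peak")  # ~367, quoted as 360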

Firmware-Based Platform Reliability [ Source: Intel ]

January 2008- With high-performance computing clusters being built from thousands of commodity systems, and with traditionally high-performance computing workloads being targeted at commodity platforms, reliability is not to be taken for granted. Firmware-based reliability features will bring added value to mainstream platforms, lower system management costs, and help to overcome the reliability challenges introduced by system architecture and semiconductor physics trends.

Enterprise Manager Grid Control Performance Best Practices [ Source: Oracle ]

January 2008- The Grid Control (10g) version of Enterprise Manager (EM) is no exception; in fact, it sets several precedents in terms of added built-in functionality as well as the ability to scale to hundreds of users and thousands of systems/services on a single EM implementation. A large part of the Grid Control development focus was placed on minimizing EM’s resource utilization while adding more built-in, high-volume data processing. The purpose of this paper ...

High-Performance Computing Solution to Optimize Product Design [ Source: Hewlett-Packard ]

January 2008- Design Chain Accelerator (DCA) improves product development by delivering superior high-performance product development solutions for manufacturers, such as those in the automotive and aerospace industries. By streamlining the manufacturing process with precision and reliability, DCA helps companies develop marketable products grounded in sound design principles. It reduces costly design errors by helping solve the most extreme engineering simulation problems, including Computational Fluid Dynamics (CFD), crash, and structural simulations, before committing the design to manufacturing.

ACRES Architecture and Compilation [ Source: Hewlett-Packard ]

January 2008- High-performance computing engines often provide product-defining functionality within consumer devices. These devices are traditionally implemented using either ASICs or embedded processors. ASICs are inflexible and incur high design costs, while embedded processors provide inadequate compute power and efficiency for specialized applications. This work describes the ACRES platform, which combines the flexibility of a programmable technology with the efficiency of custom hardware without incurring high-cost, high-risk chip development. The ACRES platform consists of programmable computing ...

Using a High-Performance Computing Solution to Optimize Product Design [ Source: Hewlett-Packard ]

January 2008- Optimizing a product design early on eliminates the expense of building and testing multiple prototypes and reduces changes late in the manufacturing cycle. Design Chain Accelerator (DCA) is a high-performance computing solution for enhancing product-development processes in industries such as aerospace, automotive, and general manufacturing. Complex systems in disciplines such as computational fluid dynamics (CFD), structural analysis, and crash simulation can be modeled before expensive prototyping begins, allowing cost-effective optimization of product designs. For products involving ...