The “great debate” about power efficiency vs power density is one of those classic engineering arguments that can quickly get lost in the weeds. It’s also closely tied to the law of diminishing returns.
Case in point: device and module manufacturers spend seemingly endless hours (and money) to show their products convert power more efficiently than the competition. In the ultra-competitive business world, a 2% efficiency gain in power conversion technology makes all the difference for marketing campaigns, product specifications, and, ultimately, bottom-line results.
The end users, however, look at the equation through a different lens. Sure, they’re all for a modest boost in efficiency, but what matters most is the bottom line: whether their processes will be more productive and profitable or not. If power-dense electronic components help achieve this goal, that’s fine. The same can be said for power-efficient components.
Usually, a combination of power-dense and power-efficient components and processes determines overall product performance and, in turn, revenue and ROI.
Which type of component should your design team focus on implementing for your next big project? Should you implement more power-dense parts, or go with the efficiency angle? Talk to ten different engineering groups, and you’re likely to get an even 50/50 split. Here’s our take on the density/efficiency topic.
Power Density and Power Efficiency: Basic Definitions
Simply put, power density refers to the amount of power packed into a given volume (or, when measured by weight, a given mass). Ultra-compact capacitors and similar electronic components are excellent examples of “power dense” parts.
Power efficiency, meanwhile, is all about the ratio of output to input. The most efficient power distribution systems, for example, convert nearly all of the energy they take in into useful output, losing as little as possible to heat. Think of an efficient power plant as a practical example of a power-efficient system. Nuclear power plants are known for their power efficiency; if not for a few significant negative aspects associated with them (nuclear waste, industrial accidents, radiation, etc.), nuclear power plants would be much more popular in the United States. Other countries, like Russia and France, lean on nuclear energy’s power-efficient properties for a far larger share of their electricity.
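To put those two definitions side by side: efficiency is simply useful output power divided by input power, while power density is output power divided by the converter’s volume. Here’s a minimal sketch in Python, using made-up numbers for a hypothetical DC-DC module rather than any real product’s specifications:

```python
def efficiency(p_out_w: float, p_in_w: float) -> float:
    """Power efficiency: useful output power divided by input power."""
    return p_out_w / p_in_w


def power_density(p_out_w: float, volume_cm3: float) -> float:
    """Power density: output power per unit volume, in W/cm^3."""
    return p_out_w / volume_cm3


# Hypothetical DC-DC module: 500 W out, 520 W in, 25 cm^3 package.
print(f"Efficiency:    {efficiency(500, 520):.1%}")           # ~96.2%
print(f"Power density: {power_density(500, 25):.1f} W/cm^3")  # 20.0 W/cm^3
```

Notice how the two metrics can pull in different directions: shrinking the package raises power density, but cramming the same losses into less volume makes the thermal problem harder, which is exactly where the debate begins.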
So how do these two important performance factors figure into everyday design applications? Let’s take a tour of one of the most commonplace facilities, a data center, to find out.
The Heat Is On: Temperatures and Power Converter Efficiency
Keeping cool under pressure is a well-known saying, but it’s a way of life for engineers and designers involved with data centers, control systems, and other heat-sensitive environments. The need for cooling fans and other temperature-control measures eats up precious man-hours and resources in overall design considerations.
But does it have to be this way? Consider another popular rule of thumb: for every 10° C rise in operating temperature, the expected lifespan of most electronic devices is roughly cut in half. Thus, it literally pays to optimize power density and power efficiency, especially when rising temperatures can significantly shorten the maintenance intervals for key system components.
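To make that rule of thumb concrete, here’s a minimal sketch assuming a simple “lifespan halves every 10° C” model with made-up baseline numbers; real derating behavior should always come from the component datasheet:

```python
def estimated_lifespan_hours(base_hours: float, base_temp_c: float,
                             actual_temp_c: float) -> float:
    """Rule-of-thumb derating: lifespan halves for every 10 deg C above the baseline."""
    return base_hours * 2 ** ((base_temp_c - actual_temp_c) / 10)


# Hypothetical capacitor rated for 10,000 hours at 25 deg C.
for temp_c in (25, 35, 45, 55):
    hours = estimated_lifespan_hours(10_000, 25, temp_c)
    print(f"{temp_c} C: ~{hours:,.0f} hours")
# 25 C: ~10,000   35 C: ~5,000   45 C: ~2,500   55 C: ~1,250
```

The takeaway is that temperature works exponentially against you, which is why even a few degrees of extra headroom from a more efficient (less lossy) converter can noticeably stretch replacement schedules.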
Thanks to more durable designs and steadily improving high-temperature performance, most electronic devices (capacitors, diodes, switches, etc.) no longer require the traditionally recommended 20-22° C environment to meet minimum engineering expectations. In fact, allowing your data center and its electronic components to run a bit warmer, even up to 32° C, doesn’t hamper overall system performance.
Smaller case sizes for power supplies are recommended, along with a slightly elevated data center temperature. While it’s easy to get bogged down in minute power density and power efficiency calculations, here’s the bottom line: by allowing your system to run hotter, you can significantly reduce cooling costs and resources while maintaining total system integrity.
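If you want to put a rough number on those savings, a back-of-the-envelope estimate like the one below can help. The per-degree savings rate here is purely an assumed placeholder; substitute a figure measured for your own facility before drawing any conclusions:

```python
def estimated_cooling_savings(annual_cooling_cost: float,
                              setpoint_increase_c: float,
                              savings_rate_per_c: float = 0.04) -> float:
    """Rough annual cooling-cost savings from raising the temperature setpoint.

    savings_rate_per_c is an assumed fraction saved per deg C of setpoint
    increase; replace it with data from your own facility.
    """
    return annual_cooling_cost * savings_rate_per_c * setpoint_increase_c


# Hypothetical data center: $200,000/year cooling cost, setpoint raised 22 -> 27 deg C.
print(f"Estimated savings: ${estimated_cooling_savings(200_000, 5):,.0f} per year")
```

The structure of the estimate is the point, not the specific output: pair the warmer setpoint with the lifespan math above, and you can weigh cooling savings against any added component wear before committing to a new operating temperature.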