17 Dec 2021

Discussions about the pace of development for electronics and electrical equipment roadmaps commonly converge on Moore’s Law and even microelectromechanical systems (MEMS). In a more pragmatic sense, the conversation should be about how system components, particularly power solutions, are enabling systems to take advantage of the advances brought on by Moore’s-Law-like, generational improvement in compute transistor density.

This blog examines that perspective to characterize the gap between source-side and load-side development and to explain why the two are not as far apart as they may outwardly seem.

Separating the Source & Load

When evaluating any system (or collection of systems) in terms of the power solution(s) and/or other analyses related to power consumption, energy efficiency, or overall energy/carbon footprint, it helps to separate the sources from the loads. In the simplest form, that is separating the power supplies/solutions from the end loads consuming the power these sources provide.

Think of the SOURCES and LOADS as independent black boxes that “talk” to each other. The figure below shows an arbitrary breakdown of a system in block diagram form, in this case, a computing or server-like architecture highlighted to show the difference between the typical sources and typical loads in the system.

[Figure: block diagram of a server-like architecture, highlighting typical sources vs. typical loads]

This distinction of separating the sources from loads is particularly important when trying to understand the pace of technology in a complicated system that contains numerous components (perhaps each complicated system in its own right) impacted by an endless number of engineering, manufacturing, supply chain, and global economic variables.

It is no coincidence that the trends of exponential improvement (whether the metric characterizes transistor count, feature size, power density, energy efficiency, etc.) tend to be far more associated with the load side than the source side of things. The source-side components tend to be dominated by magnetics, power transistors, and energy storage. These kinds of components tend to double their key figures of merit (FOMs) closer to once each decade, rather than each year or two as low-voltage semiconductors do.

What does Moore’s Law have to do with power solutions?


Discussions about the pace of development for electronics and electrical equipment roadmaps commonly converge on Moore’s Law, which is more an economic trend of transistor scaling than any kind of technical scaling rule or physical law in the traditional sense.

So even for those not technically tracking any of this, there seems to be a general perception in the electronics industry that everything (e.g. – all components, supply chains, and engineering efforts) somehow adheres to this pace of doubling performance every 18-24 months. Of course, even the semantic definition of “performance” can be the target of much debate, so that will be left to the wayside for purposes of this discussion.


Aside from Moore’s Law’s impact on transistor size/count in an integrated circuit (IC), there is another trend driving major system power budget reductions.

Moore’s Law keeps logic shrinking at an exponential pace, while microelectromechanical systems (MEMS) shrink and integrate sensors to the point of being nearly invisible to the naked eye.



It should be clearly distinguished, though, that Moore’s Law tends to drive a major INCREASE in load power (i.e. – per transistor, power goes down, but packing in more of them makes the power density, or dissipated power in a given footprint, constantly go up), whereas MEMS tends to drive a major DECREASE in load power, because applications will not tend to necessitate an exponential increase in the number of sensors even with an exponential decrease in individual sensor power.

On the other hand, MEMS is driving the integration of multiple sensors (and sometimes co-packaging of processing and/or communications as well).
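That contrast can be sketched with a few lines of Python. Every number below is a made-up, purely illustrative scaling figure, not data from any real device:

```python
# Illustrative (made-up) scaling numbers, not measured data.

def total_power(unit_power_w, count):
    """Total load power = power per unit * number of units."""
    return unit_power_w * count

# Moore's Law side: power per transistor halves each generation,
# but transistor count per die more than doubles.
gen0 = total_power(1e-9, 1e9)      # 1 nW/transistor * 1e9 transistors = ~1 W
gen1 = total_power(0.5e-9, 2.5e9)  # per-transistor power halved, count 2.5x
assert gen1 > gen0                 # net die power still goes UP

# MEMS side: sensor power halves, but the application still needs
# roughly the same number of sensors.
sens0 = total_power(1e-3, 10)      # 10 sensors at 1 mW each
sens1 = total_power(0.5e-3, 10)    # same 10 sensors at 0.5 mW each
assert sens1 < sens0               # subsystem power goes DOWN
```

The asymmetry is entirely in the count: logic density grows faster than per-transistor power falls, while sensor count stays roughly flat as per-sensor power falls.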


With a reduction in transistor feature size comes a reduction in threshold voltage, which effectively means ICs can operate with ever-decreasing bias voltage rails. This is why microprocessors went from requiring ~2.5/3.3V rails to ~1.2/1.5V rails, and now even <<1.0V power rails.
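A quick sketch of the classic CMOS dynamic power relation, P = αCV²f, shows why those falling rails matter so much. The activity factor, capacitance, and frequency values here are arbitrary placeholders; only the V² dependence is the point:

```python
# Sketch of CMOS dynamic power, P = a * C * V^2 * f.
# All component values are illustrative assumptions.

def dynamic_power(activity, cap_farads, vdd_volts, freq_hz):
    """Dynamic switching power of a CMOS load."""
    return activity * cap_farads * vdd_volts**2 * freq_hz

# Same switched capacitance and clock, only the rail voltage changes:
p_3v3 = dynamic_power(0.1, 1e-9, 3.3, 1e9)  # legacy ~3.3 V rail
p_1v0 = dynamic_power(0.1, 1e-9, 1.0, 1e9)  # modern ~1.0 V rail

# Power scales with V^2, so dropping from 3.3 V to 1.0 V cuts dynamic
# power by roughly 10x for the same switching activity.
assert abs(p_3v3 / p_1v0 - 3.3**2) < 1e-9
```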

As mentioned, power density still tends to increase by packing in more of these lower-voltage transistors, which translates into a constant trend of driving up the input currents demanded by these dense loads. The densely packed loads also put a bigger strain on power supplies by increasing the transient demand in terms of faster voltage (~100V/ns) and current (~1,000A/µs) transitions.

How do power solutions keep up with Moore’s Law?


As highlighted in many resources regarding the design and optimization of power solutions, the most common FOMs for a system are its size, weight, and power (a.k.a. – SWaP factors) characteristics.

When combined with a cost metric, this can also be referred to as SWaP-C factors. It is obvious how shrinking loads drive regular SWaP improvement, but less so on the source side.


In the more pragmatic sense, it seems the conversation should be around how system components, particularly power solutions, are enabling systems to take advantage of the advances brought on by Moore’s-Law-like, generational improvement in compute transistor density and the integration of MEMS devices.

The power solutions are not required to shrink with the low-voltage transistors or even meet their power densities at a 1:1 ratio to enable systems to utilize the evolutionary enhancements of the loads.



The increased transients described above will organically drive the need to bring the power supply closer to the high-transient load. This is not only for efficiency optimization by mitigating thermal dissipation (P = I²R) and voltage drop (V = IR), both made more challenging by higher currents, but also for preventing catastrophic voltage overshoot resulting from even little bits of parasitic equivalent series inductance (ESL, 1s – 10s of nH) previously considered negligible in older generations of systems.
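A back-of-envelope check with v = L·di/dt shows just how catastrophic those “little bits” of ESL become at the slew rates mentioned above. The 10 nH figure below is an assumed, illustrative value within the stated range:

```python
# Back-of-envelope estimate of the voltage spike across a parasitic
# inductance, v = L * di/dt. Values are illustrative assumptions in
# the ranges discussed above (a few nH, ~1,000 A/us slew).

def esl_spike(esl_henries, di_amps, dt_seconds):
    """Voltage developed across a parasitic inductance during a transient."""
    return esl_henries * di_amps / dt_seconds

# 10 nH of package/layout ESL with a 1,000 A/us current transient:
v_spike = esl_spike(10e-9, 1000, 1e-6)  # ~10 V spike on a sub-1 V rail
```

A roughly 10 V excursion on a rail regulated below 1 V is well past catastrophic, which is why inductances once dismissed as noise now dominate placement decisions.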

This highlights a major design challenge for power solutions that work to keep up the pace with Moore’s Law and MEMS by utilizing faster power switches, particularly using wide-bandgap (WBG) chemistries, such as gallium nitride (GaN), silicon carbide (SiC), gallium arsenide (GaAs), or aluminum nitride (AlN). 


The figure below highlights how such a small amount of ESL, from the component package alone, can have catastrophic effects on a design. This is even before one has put extreme time and effort into a very clean, tight layout that contains these current flows as well as possible.

It should be noted that a lack of R&D in the advancement of high-frequency magnetic materials is currently the ultimate bottleneck in unlocking the full potential of the ultra-fast switching speeds WBG power switches are capable of.


Integration and advanced packaging techniques are the driving force keeping power solutions on pace with their shrinking load counterparts. Moore’s Law directly facilitates power conversion by allowing the integration of power management and control functionality into more consolidated power management ICs (PMICs), which can integrate power conversion (even integrating power switches), control logic, power conditioning, digital control and/or telemetry reporting, and management of external energy storage and feedback.

This integration of power subsystems brings discrete solutions into the IC domain to dramatically reduce board footprint space along with enhancing control and optimizing the overall efficiency of energy commutation.



Heterogeneous integration of MEMS sensors along with other miniaturized components, such as microcontrollers, wireless radios, and antennae, has led to reduced power consumption of these loads directly, as well as the savings that come with mitigating the distinct system overheads of supporting each load independently.

The mere act of supporting so many system components with so little power inherently increases the value proposition of a given power solution, since the same amount of power can now support more load. But SWaP is even further improved by enabling physically smaller power supplies to concurrently provide more power output (even while supporting wider input voltage ranges).


Three-dimensional power packaging (3DPP®) is at the convergence of everything discussed. Even with a slower pace of improving magnetic material properties, the overall performance and size of the major magnetic constituents of power solutions have seen dramatic improvement with the migration from wire-wound magnetics (often involving manual, hand-winding techniques) to planar magnetics that use finely controlled features to lay out windings and integrate into the printed circuit assembly (PCA) with embedded magnetic core material.

This enables even highly complex magnetic structures to be integrated in a way that allows for tight process control (e.g. – increased reliability), while concurrently taking advantage of manufacturing economies of scale to check just about every box in the SWaP-C checklist of goals. The figure below is an example of a cutaway of a point-of-load (PoL) converter that integrates a control/switch IC die, power magnetics, and module packaging into a compact solution that is optimized for space and thermally enhanced for ease of heat spreading as well.

Getting Creative, While Keeping the Pace


Given the aforementioned point that the load (system) power budget is on a much faster reduction trend than the increase in source (power solution) availability, keeping pace with Moore’s Law is far more attainable by putting a maniacal focus on reducing the system power budget than by putting most of the engineering cycles into implementing a bigger power supply.

This is where intelligent power management (IPM) techniques shine. IPM is a “combination of hardware and software that optimizes the distribution and use of electrical power in computer systems and data centers”. This is more a frame of mind in the design approach than anything else; for instance, changing the approach to power subsystem architecture from an “always on” to an “always available” mentality can bring paradigm shifts in the results of the end solution.
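As a simple sketch of the “always available” mentality, consider the average-power arithmetic for a duty-cycled load. The power levels and duty cycle below are assumed, illustrative values:

```python
# Sketch of the "always available" idea: duty-cycle a subsystem instead
# of leaving it fully powered. All numbers are illustrative assumptions.

def average_power(active_w, sleep_w, duty):
    """Time-weighted average power for a duty-cycled load."""
    return active_w * duty + sleep_w * (1.0 - duty)

# "Always on": radio never sleeps, burning full power continuously.
always_on = average_power(active_w=0.5, sleep_w=0.5, duty=1.0)

# "Always available": awake 2% of the time, deep sleep otherwise.
duty_cycled = average_power(active_w=0.5, sleep_w=0.001, duty=0.02)

# Roughly 0.5 W vs ~0.011 W: a ~45x reduction with no loss of function
# as long as the load can wake quickly enough when needed.
assert duty_cycled < always_on / 40
```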


There is always a demand and use for increased energy density and cycle life FOMs for energy storage components. Like magnetics roadmaps, evolutionary improvement in (safe, practical) energy storage is closer to an order of magnitude off the pace of Moore’s Law.

Even with that being the case, it does not imply a showstopper in helping power solutions keep the pace of system improvement (e.g. – mostly SWaP-C) targets.



This is most apparent when sizing the power solution to the worst case (in terms of demand, transient, temperature, manufacturing tolerance, safety margin, etc.), which leaves a very wide spectrum of subjectivity in finding the balance between meeting the needs of the system/application and not getting too out of hand in terms of overdesign.

This point also really accentuates the importance and justification of careful characterization of loads before designing/implementing their respective power solutions.


For instance, a system may have infrequent peaks of high-power draw, where most of the time is spent at a significantly lower, steady-state power level.

It is highly wasteful (in terms of capital and operational expenditures, a.k.a. – CAPEX/OPEX) to engineer all the system’s power supplies, upstream distribution/holdup, etc. to those infrequent peaks when that demand can be met with some localized energy storage, therefore leaving the rest of the system to be optimized for that lower steady-state. This is the concept of peak shaving, which can be applied to any system from micro-power to macro-power footprints.
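A minimal sketch of the peak-shaving arithmetic, using a hypothetical load profile, shows how little local energy storage it takes to cover an infrequent peak:

```python
# Peak-shaving sketch: size the supply for the steady state and cover
# infrequent peaks from local energy storage. The load profile and
# supply rating below are illustrative assumptions.

# (duration_seconds, watts) segments of a hypothetical load profile:
profile = [(58, 10.0),   # 58 s at a 10 W steady state
           (2, 100.0)]   # 2 s infrequent peak at 100 W

# Supply sized a bit above steady state, NOT for the 100 W peak:
supply_limit_w = 12.0

# Energy the local storage must deliver: everything above the limit.
storage_joules = sum(t * max(p - supply_limit_w, 0.0) for t, p in profile)
# -> 176 J of local holdup covers the entire 100 W excursion, letting
#    the rest of the distribution be optimized for ~10 W.
```

In other words, a modest capacitor bank or battery absorbs the 2-second peak, and everything upstream of it can be designed around the 10 W steady state instead of the 100 W worst case.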



Another variation of IPM is the utilization of load shedding/consolidation/allocation techniques. Nothing utilizes less power than something that is off, and nothing utilizes active power more efficiently than a load operating at the peak point of the load-vs.-efficiency curve of the upstream power solution.

So, whether it means turning off power to subsystems when not in use (i.e. – radio in a sleep state, dark silicon blocks of IC) or consolidating smaller loads that might otherwise require independent power supplies, this enables the implementation of effectively denser, more efficient power solutions.
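A toy model shows why consolidating light loads onto one supply helps. The efficiency curve below is invented purely for illustration (converters are typically least efficient at light load), not taken from any real converter’s data sheet:

```python
# Sketch of load consolidation: two lightly loaded supplies can waste
# more input power than one supply running closer to its efficiency
# peak. The efficiency curve is a made-up illustration.

def efficiency(load_fraction):
    """Toy efficiency curve: poor at light load, approaching 95% near full load."""
    return 0.95 * load_fraction / (load_fraction + 0.05)

def input_power(output_w, load_fraction):
    """Input power drawn to deliver output_w at a given loading point."""
    return output_w / efficiency(load_fraction)

# Two supplies, each only 20% loaded, delivering 10 W apiece...
split = 2 * input_power(10.0, 0.20)
# ...vs one supply at 40% load delivering the combined 20 W.
consolidated = input_power(20.0, 0.40)

# Same 20 W of useful output, but the consolidated supply draws less
# input power because it operates higher on its efficiency curve.
assert consolidated < split
```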


An example of intelligent power allocation is understanding a realistic scenario of external power needs, such as with universal serial bus (USB) or Power over Ethernet (PoE) ports. These ports may be capable of sourcing more power as an aggregated peak (i.e. – all ports operating at max power concurrently), but they will not all be operating concurrently and therefore should not have an upstream power supply designed to source that aggregated peak.

Furthermore, it is uncommon for all system loads to be operating at their maximums simultaneously, so creating a system power budget by simply summing the maxima (e.g. – data sheet worst-case maximums) of all loads is highly impractical in just about any use case.
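As a sketch of that budgeting arithmetic, with hypothetical loads and an assumed diversity factor (the 70% figure is illustrative; real values come from characterizing the actual loads):

```python
# Budget sketch: summing data-sheet maxima vs applying a diversity
# factor for loads that never peak simultaneously. All values are
# illustrative assumptions for a hypothetical system.

load_maxima_w = {"cpu": 65, "gpu": 150, "usb_ports": 60, "radio": 5}

# Naive budget: sum of every load's data-sheet worst-case maximum.
worst_case = sum(load_maxima_w.values())

# Characterization shows at most ~70% of the aggregate peak is ever
# demanded at once, so the supply can be sized to that instead.
diversity_factor = 0.7
realistic_budget = worst_case * diversity_factor

# worst_case is 280 W, while the realistic budget is ~196 W: the same
# system, but a meaningfully smaller (cheaper, cooler) power solution.
```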



Where possible, disaggregating the loads in a complex system in order to group them enables the optimization of their specific power subsystem(s) and therefore lends itself to optimizing SWaP-C.

This allows a design engineer to take advantage of the best of both worlds (Moore’s Law/MEMS and non-Moore’s Law/MEMS direct impacts).

Conclusion

No one is kidding themselves by making the bold statement that all aspects of power solutions will keep a direct pace with Moore’s Law and the advancement of MEMS (soon to be referred to more often as nanoelectromechanical systems, or NEMS) devices.

As commonly depicted in industry news of the past several years, it hardly even seems clear that Moore’s Law itself will continue (either in its existing form or something like it) into the foreseeable future. Even though this can create a gap between the available source power and the load power demand, it is not a gap that grows exponentially into an ever-widening chasm that makes power subsystems a reason to scale back system functionality.


As discussed, there are many creative techniques that power solution designers and system engineers use to keep pace and continue taking advantage of the technology enhancements Moore’s Law and MEMS provide regularly.

IPM techniques are at the heart of this because they make much smarter use of every watt, instead of a more rudimentary matching of the source to the load in the traditional (e.g. – worst-case peak) sense. Energy storage is also a highly underappreciated and underutilized tool in the toolbox for helping to meet full system performance expectations, both reliably and while maintaining a roadmap of reduced system size and increased power density.