Optimize for size and cost of a variable frequency drive power supply

One of the products my company designs is, essentially, common bus power supplies for variable frequency drives (VFDs). Thus far we have only sold into the United States. We are considering selling into Europe. CE standards will require us to meet various requirements: construction safety, noise immunity, radiated noise, conducted noise, and RoHS. I don’t believe we’ll need to change much about our designs for most of those issues. Conducted noise, though, looks like it will require some additional components. I need to figure out what those components are before we can proceed with standards evaluation.

I’m familiar with a number of VFDs from different manufacturers, as well as some other common bus power supplies. All I’ve seen share a very similar EMC filter design, including two optional protective earth (PE) jumpers that can be added or removed as desired. The below is my theory as to the nature and purpose of these filters. If there’s a flaw in my understanding, it will obviously affect the answer to my ultimate question!

This represents a variable frequency drive being fed from a grounded-neutral transformer, with a three-phase reactor between the two. FETs are used because there’s no symbol for an IGBT in the schematic editor.

[Schematic created using CircuitLab]

When the IGBTs switch, the switching edge has frequency content extending into the megahertz range, which means parasitic capacitances start to matter. I’ll represent those with a single capacitance from DC- to earth, though of course the capacitance is distributed along the motor leads, housing, and every other component in the system.
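To put rough numbers on this, a common rule of thumb places the significant frequency content of an edge near 0.35/t_rise. The rise time and lumped parasitic capacitance below are assumed example values, not measurements:

```python
import math

# Rule-of-thumb bandwidth of a switching edge: f_knee ~ 0.35 / t_rise.
# Both values below are assumed examples, not measurements.
t_rise = 100e-9  # IGBT voltage rise time, seconds (assumed)
C_par = 1e-9     # lumped parasitic capacitance, DC- to earth, farads (assumed)

f_knee = 0.35 / t_rise                      # ~3.5 MHz
Z_par = 1 / (2 * math.pi * f_knee * C_par)  # capacitor impedance at f_knee

print(f"edge content significant up to ~{f_knee / 1e6:.1f} MHz")
print(f"parasitic path impedance there: ~{Z_par:.0f} ohms")
```

At those frequencies the parasitic path is only tens of ohms, which is why it matters at all.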

In actual VFDs, these capacitors are typically arranged as seen here. I do not believe the arrangement matters for the purposes of this question, as long as the line-to-neutral capacitance is achieved. Also, these capacitors must have a Y-class safety rating.

Since each phase of the transformer is a source, current must flow out one end of the winding, then back to the grounded neutral. With the parasitic capacitance in place, there are two paths: in one AC input of the VFD and out another, or in one AC input and out through the parasitic capacitance. Naturally, current follows all paths, in inverse proportion to their impedances. At high frequencies, the parasitic capacitance is a much lower impedance than the largely inductive path through the AC input. Essentially, we have a current divider, and the capacitor takes much more current than the other path at high frequencies.
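The current-divider point is easy to demonstrate numerically. Both the line inductance and the parasitic capacitance below are assumed illustrative values, not measurements:

```python
import math

# Current divider between the inductive path back through the other AC
# inputs and the parasitic capacitance to earth. Values are assumed.
L_line = 100e-6  # effective line/reactor inductance per path, H (assumed)
C_par = 1e-9     # parasitic capacitance to earth, F (assumed)

for f in (50, 10e3, 1e6):
    Z_L = 2 * math.pi * f * L_line
    Z_C = 1 / (2 * math.pi * f * C_par)
    # Parallel paths share current in proportion to admittance (1/Z).
    frac_cap = (1 / Z_C) / (1 / Z_C + 1 / Z_L)
    print(f"{f:>9.0f} Hz: {100 * frac_cap:.2e} % of current takes the capacitive path")
```

At 50 Hz essentially nothing flows through the parasitic path; around 1 MHz, with these assumed values, the majority of the current does.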

Current flow through the parasitic capacitance has negative effects. In physical reality, this is current flow through every grounded object close to the VFD, motor leads, and motor. That basically turns the entire system into a giant antenna broadcasting the frequency content of the switching edge, not to mention possibly messing up other ground references nearby. There may be other bad effects I don’t understand, as well.

We cannot eliminate this parasitic capacitance. Nor can we substantially reduce the frequency content of the switching edge (though we can slow down the IGBT switching to some degree). What we can do is alter the impedance ratios and reduce the amount of current flowing through the parasitic capacitance.

First, we add a common mode choke. This can go either on the DC bus or the AC input, but earlier in the power flow is probably better than later. I’ll show mine on the DC bus for ease of drawing.

An ideal common mode choke appears as zero impedance to differential-mode currents, which flow in opposite directions through its windings. It appears as a high inductance to common-mode currents, which flow in the same direction through both windings. This device has increased the inductance of the path through the parasitic capacitance, increasing its high-frequency impedance and reducing the current flow through this path.
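To see what the choke buys, compare the impedance of the path through the parasitic capacitance at 1 MHz with and without the choke's inductance in series (values assumed for illustration):

```python
import math

# Impedance of the path through the parasitic capacitance, with and
# without the choke in series. All values are assumed for illustration.
f = 1e6        # frequency of interest, Hz
C_par = 1e-9   # parasitic capacitance to earth, F (assumed)
L_cm = 10e-3   # common-mode inductance of the choke, H (assumed)

Z_without = 1 / (2 * math.pi * f * C_par)
# In series, inductive and capacitive reactances partially cancel:
Z_with = abs(2 * math.pi * f * L_cm - 1 / (2 * math.pi * f * C_par))

print(f"without choke: ~{Z_without:.0f} ohms")
print(f"with choke:    ~{Z_with / 1e3:.0f} kilo-ohms")
```

Note that the series L-C combination also has a resonance (near 50 kHz with these assumed values) where the path impedance dips, which is one reason measured attenuation curves are not monotonic.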

Second, we add capacitors from the AC line to earth.

These capacitors provide the high-frequency switching currents a path to earth that is not blocked by the common mode choke. The total impedance of that path to earth is now much lower than that of the parasitic capacitance. Unlike the parasitic capacitance, these are physical discrete capacitors tied directly to a ground wire. The switching currents are contained to a defined path, rather than polluting the grounds of the entire electrical neighborhood.
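Since both return paths are capacitive, at any given frequency the current split is set simply by the capacitance ratio. The Y-cap values below are assumed examples:

```python
# Two parallel capacitive paths share current in proportion to their
# capacitances. Values are assumed examples, not a recommendation.
C_par = 1e-9      # parasitic capacitance to earth, F (assumed)
C_y = 3 * 4.7e-9  # three line-to-earth Y caps of 4.7 nF each (assumed)

frac_y = C_y / (C_y + C_par)
print(f"{100 * frac_y:.1f}% of the HF current returns through the Y caps")
```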

Since the common-mode choke has leakage inductance, it does add some high-frequency impedance to the path through the AC line filter capacitors. Thus we add our third filter component, capacitors on the DC bus downstream of the common mode choke.

These capacitors see the same additional series impedance as the parasitic capacitance, but their value is much higher than the parasitic capacitance, and thus their impedance will be lower. Like the AC line capacitors, these must also be Y-rated.
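Following that reasoning, the path back through the AC line capacitors now carries the leakage inductance in series, while the DC-bus capacitors bypass it entirely; at high frequency the difference is large. All values below are assumed examples:

```python
import math

# Compare the return path through the AC-line Y caps (leakage inductance
# in series, per the reasoning above) with the direct DC-bus cap path.
# All values are assumed examples.
f = 10e6          # frequency of interest, Hz
L_leak = 5e-6     # choke leakage inductance, H (assumed)
C_line = 14.1e-9  # total AC-line Y capacitance, F (assumed)
C_dc = 10e-9      # DC-bus Y capacitance, F (assumed)

Z_line_path = abs(2 * math.pi * f * L_leak - 1 / (2 * math.pi * f * C_line))
Z_dc_path = 1 / (2 * math.pi * f * C_dc)

print(f"via AC-line caps: ~{Z_line_path:.0f} ohms")
print(f"via DC-bus caps:  ~{Z_dc_path:.1f} ohms")
```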

All this I understand, or at least think I do. But how does one appropriately size these components?

It seems to me that the best scenario is obvious. First, make the filter capacitors as large as possible, giving the minimum high-frequency impedance. We need Y-rated capacitors with the necessary AC and DC voltage ratings, which for an individual capacitor puts us in the <1 uF range. Probably film, though there are some Y-rated ceramic caps. But we can parallel as many capacitors as we like. How do I know when to stop?
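One bound I can at least put numbers on: every line-to-earth capacitor conducts mains-frequency current into PE, and safety standards limit that earth-leakage (touch) current. The 3.5 mA figure below is an assumed example limit, not necessarily the one that applies to our product class:

```python
import math

# Maximum total Y capacitance allowed by a leakage-current limit.
# The limit value is an assumed example; the applicable standard governs.
V_ln = 230.0    # line-to-earth voltage, V (assumed 230 V network)
f_line = 50.0   # mains frequency, Hz
I_max = 3.5e-3  # allowed earth leakage current, A (assumed example)

C_max = I_max / (2 * math.pi * f_line * V_ln)
print(f"total Y capacitance budget: ~{C_max * 1e9:.0f} nF")
```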

Second, make the common-mode choke have as high a common-mode impedance and as low a leakage inductance as possible. I don’t know terribly much about winding inductors, but this seems to mean using a large core with minimal turns. But again, we can get or assemble arbitrarily large cores.
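The "large core, few turns" intuition can be sketched with the standard toroid relation L = A_L * N^2; the A_L value below is an assumed example from a hypothetical ferrite datasheet:

```python
# Inductance of a wound toroid: L = A_L * N^2. A bigger core generally
# has a higher A_L, so fewer turns reach the same common-mode inductance.
# The A_L value is an assumed example.
A_L = 5000e-9  # inductance factor, H per turn^2 (assumed)
N = 10         # turns per winding

L_cm = A_L * N ** 2
print(f"common-mode inductance: {L_cm * 1e3:.2f} mH")
```

Fewer turns also generally mean less winding-to-winding capacitance, which matters at the megahertz frequencies involved.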

Obviously there’s some acceptable minimum to these component values, allowing us to optimize for size and cost. How do I figure out what it is, and know when I’ve reached it? And is there in fact a maximum acceptable value for any of these filter components?

Funny that a question of this kind shows up in a forum like this…

Even funnier that you actually reached somebody who can give you a (however limited) answer…

Well, my company produces devices for the EU market.
I don’t have much knowledge about power electronics and EMI issues in general, but we regularly deal with EMC, and I figure that is quite the other side of the coin.

Usually, before we hand in a product for certification, we do some precompliance tests; to save costs, these sometimes don’t even take place at an accredited test site.
During these tests, a test engineer basically runs your device through all the tests from the standards you would like to be tested against.
If there is an issue with one of the tests, you can make repairs and adjust component dimensions to improve the performance.
Usually, once the emissions are around 3 dB below the limit, you stop and call it a day.
Depending on the adjustments you had to make, there might be a need for a PCB redesign (e.g., you needed bigger caps than fit on the current layout).

Then, after these adjustments are put into the series design, you would either do a new preliminary test to make sure everything still works, or, if the changes were only minor, go directly for the certification test.

As for your question about the maximum acceptable filter component values: usually your procurement team tells you when the product has become too expensive to sell.

These are my 2 cents so far…