Energizing a MOSFET gate with an Arduino

I have a very high power application drawing deep into the 200+ amp regime, pulsed using a very beefy and expensive IXYS MOSFET (IXFN series) rated at 310 amps. My concern is that with such a large device, and presumably a larger gate, is the voltage on the Arduino's digital pin enough to energize it? Is there enough charge delivered fast enough to turn the gate on? I never considered such things with smaller devices, but with a big honkin' MOSFET the size of a silver dollar, I'm thinking it might be a problem. Might I need a 2-stage setup where I use a small MOSFET and power supply to drive the gate of the larger one?

Your MOSFET will not switch fully on with the output voltage of an Arduino Uno.

High-current MOSFETs need gate voltages in the 6-10 volt region.

If you are lucky, your MOSFET will not switch at all. If you are unlucky, you will end up in the MOSFET's linear region, where the on-resistance is several ohms. This will overheat your MOSFET immediately.

I have attached a circuit that will solve your problem. The transistor can be almost any NPN type. The resistor values are not critical either.

A MOSFET's gate is voltage controlled, not current controlled like a bipolar transistor's base.

Next time, please tell us exactly which MOSFET you have; I would have looked it up in the datasheet.

If you need really fast switching you might replace the 3K3 base resistor with 2K2 and the 10K resistor with 4K7. This should be low enough to compensate for the internal capacitance of the transistor and MOSFET. However, I do not believe that you want to switch 200 amperes at 100 kHz or higher.

And do not forget: the transistor stage inverts your signal. A HIGH output at the Arduino turns into a LOW at the gate.
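
For example, a minimal sketch on the Arduino side, assuming the 3K3 base resistor hangs off digital pin 9 (the pin choice is arbitrary). Because the stage inverts, LOW at the pin means MOSFET on:

// Minimal sketch for the inverting NPN driver stage described above.
// Assumption: the 3K3 base resistor is driven from digital pin 9.
const int GATE_DRIVE_PIN = 9;

void setup() {
  pinMode(GATE_DRIVE_PIN, OUTPUT);
  digitalWrite(GATE_DRIVE_PIN, HIGH);  // transistor on, gate pulled low, MOSFET OFF
}

void loop() {
  digitalWrite(GATE_DRIVE_PIN, LOW);   // transistor off, gate pulled up to 12 V, MOSFET ON
  delay(100);
  digitalWrite(GATE_DRIVE_PIN, HIGH);  // MOSFET OFF again
  delay(100);
}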

Sorry, I don't have the chip in my hand to check the number BUT I think it's this one. If it isn't, it's really close to this one:

I specifically picked the chip to have a logic-level gate threshold, so there's no need for a higher voltage than the Arduino can generate. The only issue is whether the charge provided by the Arduino is sufficient to activate the gate.

The chip above says 600 nC. I know what a coulomb is, but how do I know how many coulombs an Arduino digital pin can provide in a given time interval? Plus, as the charge ramps, it will put the FET through the linear regime for a short time, which could be a problem for thermal dissipation.

On second thought... If you short-circuit a digital pin, we already know it can at least dump 40mA right?

40 mA is 40,000,000 nA = 40,000,000 nC/sec.

40,000,000 / 600 ≈ 67,000, so 600 nC can be delivered in about 1/67,000 of a second.

That is roughly 15 µs to turn the gate on (and about as long to turn it off again).

That isn't a crazy limit on switching frequency by itself, but it means every edge spends ~15 µs dragging the FET through its linear region, and it relies on the pin surviving its absolute-maximum current on every transition.
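
Here's the same arithmetic as a throwaway sketch (both numbers are my assumptions from above):

// Rough gate-charge timing estimate. Both figures are assumptions from above:
// 600 nC total gate charge (datasheet) and 40 mA from the pin (its absolute maximum).
const float gateCharge_nC = 600.0;
const float pinCurrent_mA = 40.0;

void setup() {
  Serial.begin(9600);
  // t = Q / I; nC divided by mA gives microseconds directly.
  float edgeTime_us = gateCharge_nC / pinCurrent_mA;   // = 15 us per edge
  Serial.print("Time to deliver the gate charge: ");
  Serial.print(edgeTime_us);
  Serial.println(" us");
}

void loop() {}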

This is all based on the assumption that short-circuiting an Arduino pin will only cause a 40mA spike, which is probably not the case. It can probably dump a higher current transient than that without burning the internals. I just don't know what that is or what's safe.

Please read the last line on page 1 of the link you gave me.

RDS(on): VGS = 10 V, ID = 100 A

So for 100 A you will need a gate voltage of 10 Volts.

The datasheet is quite incomplete; it does not show a graph of the relationship between gate voltage and drain current.

Maybe for 200 amperes you will need more than 10 volts. My circuit will give you a maximum of 12 volts. Use a 15 V power supply if you want to be on the safe side.

So in case the gate is drawing more current than I expected, replace the resistor between the 12 V supply and the transistor's collector with a 1K resistor (12 mA at 12 V when the transistor is on). A 1/4 watt resistor will do the job.
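
As a quick sanity check on that resistor, here it is as a sketch (values are the ones assumed above):

// Sanity check of the 1K collector resistor suggested above (assumed values).
const float supplyVolts = 12.0;    // gate-drive supply
const float pullupOhms  = 1000.0;  // 1K collector resistor

void setup() {
  Serial.begin(9600);
  Serial.print("Gate pull-up current (mA): ");
  Serial.println(supplyVolts / pullupOhms * 1000.0);            // I = V/R = 12 mA
  Serial.print("Worst-case resistor dissipation (W): ");
  Serial.println(supplyVolts * supplyVolts / pullupOhms, 3);    // P = V^2/R = 0.144 W, well under 1/4 W
}

void loop() {}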

I may end up with no choice but to employ such a schematic (not that I don't want to but it increases system complexity and possible failure modes).

I do believe that the line you are pointing me to is just an arbitrary set of operating conditions they used to determine RDS. From my memory of the graph, which is indeed missing, you didn't need the full VGS voltage limit to get past the linear zone. I think 5 volts would suffice but the more pressing matter is whether or not I can switch at a sufficiently high frequency to convert my PWM into a variable DC voltage while simultaneously activating the gate fully with each pulse interval. I think the safer move is with the auxiliary circuit, as you said.

You are using a "very beefy and expensive" MOSFET and you are pushing 200+ amps through it. What on earth prevents you from throwing in a MOSFET driver IC for a few cents to do a proper job?

Nothing at all. As I said earlier, I'm thinking with my engineering cap on. Having circuits relying on circuits just means you are introducing failure modes and when you're playing with 200+ amps, this is dangerous and potentially stupid.

My goal is to switch 200 amps using clean PWM and then filter it into DC. The filtering alone slows down the response time so adding a pre-amp circuit to the MOSFET may add more noise and slow down responsiveness further. And if something goes wrong with that circuit, the main FET may get stuck in the on position, which would cause a large lithium fire in my trunk and explode the gas tank.

... then again it may not.

The point is, if I don't have to do it, I don't want to do it. Simpler is always better.

For high-power power electronics, simpler is not better. Lots of carefully designed and tested protection circuitry is better, unless you want explosion/fire to be the only failure modes (as opposed to orderly shutdown).

That IXYS MOSFET has a total gate charge of 0.6 µC at 10 V (but you'd drive it at 12 V of course, so more like 0.7 µC).

To switch it in 1us takes 0.7A, which sounds reasonable. A MIC4422 will drive that without sweating.

The device has a terminal current limit of 100 A, note, so whatever the actual die can take, that device is rated at 100 A max continuous, at which point it will dissipate 30 W when on (about a 0.3 V drop).

[ Driving it at high current loads from an Arduino pin will demonstrate just what havoc 5 nF of Crss can do to a microcontroller! Even ignoring the fact it needs 10 V or more of gate drive... ]
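
The arithmetic behind that drive-current figure, as a quick sketch (gate charge and switching time are the assumptions stated above):

// Average gate current needed to move the gate charge in a given switching time.
const float gateCharge_uC = 0.7;   // ~0.7 uC total gate charge at 12 V drive (assumed above)
const float switchTime_us = 1.0;   // target switching time

void setup() {
  Serial.begin(9600);
  // I = Q / t; uC divided by us gives amps directly.
  Serial.print("Required average gate drive current (A): ");
  Serial.println(gateCharge_uC / switchTime_us);   // = 0.7 A
}

void loop() {}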

To drive a big MOSFET with a gate capacitance of maybe 5 nF, you need a driver capable of sourcing and sinking 1 A or more.

This is to make the switching transient short enough to avoid high dissipation during the switching period.

Hence ArduinoAle's circuit, using a 10k pullup, which would only deliver a maximum of a bit over 1 mA, is dismally inadequate.

< 10 ohms would be nearer the mark (?!).

But......

There's a better way.....

Many manufacturers make specialist devices for exactly this job - look at eg IR's range.

Or perhaps your mosfet manufacturer recommends a suitable driver - if so use that.

regards

Allan

I think by this point it's clear to me that I will need a driver circuit. I have driven small TO-220 mosfets (~50A range) directly from the arduino without an issue but at this scale, it's clearly not up to the task. Thanks for the part number suggestions. I'll look into it. I still maintain that less is more in any design so long as what you have does the job. In this case it doesn't so the driver circuit is "necessary complexity". No complaints. I will probably also add a couple of parallel pulldowns to the gate so that it can never float on when everything is powered down.

Hi Gahhhrrrlic. Roughly what switching frequency are you aiming for?

Highest possible so I can use a smaller capacitor but it doesn't really matter. I'm only switching to make DC so higher is better so long as the transistor can cope.

allanhurst:
To drive a big MOSFET with a gate capacitance of maybe 5 nF, you need a driver capable of sourcing and sinking 1 A or more.

5 nF is not the gate capacitance: this thing has 0.7 µC of gate charge, which works out to around 70 nF at the gate. I was talking about Crss, the drain-gate feedback (Miller) capacitance.

This is to make the switching transient short enough to avoid high dissipation during the switching period.

Hence ArduinoAle's circuit, using a 10k pullup, which would only deliver a maximum of a bit over 1 mA, is dismally inadequate.

Yes, that circuit goes BANG very fast indeed!


I picked the MIC4422 as an example because it's big enough for any device (many amps of output), and it's available in a through-hole package. However, it's only a low-side driver; I don't think we know what topology is needed?
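
For what it's worth, the Arduino side of a low-side driver stays trivial. A minimal sketch, assuming the driver's logic input sits on pin 9 and a throttle pot on A0 (both placeholders), and a non-inverting driver like the MIC4422 (check the datasheet of whatever part you pick):

// Minimal Arduino-side sketch: PWM into a low-side gate driver's logic input.
// Assumptions: non-inverting driver input on pin 9, throttle pot on A0.
const int PWM_PIN      = 9;
const int THROTTLE_PIN = A0;

void setup() {
  pinMode(PWM_PIN, OUTPUT);
  analogWrite(PWM_PIN, 0);                   // start with the MOSFET held off
}

void loop() {
  int throttle = analogRead(THROTTLE_PIN);   // 0-1023
  analogWrite(PWM_PIN, throttle / 4);        // 0-255 duty; Uno pin 9 PWM is ~490 Hz by default
}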

Gahhhrrrlic:
I think by this point it's clear to me that I will need a driver circuit. I have driven small TO-220 mosfets (~50A range) directly from the arduino without an issue but at this scale, it's clearly not up to the task. Thanks for the part number suggestions. I'll look into it. I still maintain that less is more in any design so long as what you have does the job. In this case it doesn't so the driver circuit is "necessary complexity". No complaints. I will probably also add a couple of parallel pulldowns to the gate so that it can never float on when everything is powered down.

MOSFET driver chip circuits are far simpler than doing the same job in discrete components; imagine having to implement a high-current push-pull driver, under-voltage detection, and maybe a charge pump and dead-time handling yourself...

There are perhaps >2000 MOSFET and IGBT driver chips out there; it's a very common device used everywhere, and the choice is bewildering!

Gahhhrrrlic:
I may end up with no choice but to employ such a schematic (not that I don't want to but it increases system complexity and possible failure modes).

I do believe that the line you are pointing me to is just an arbitrary set of operating conditions they used to determine RDS. From my memory of the graph, which is indeed missing, you didn't need the full VGS voltage limit to get past the linear zone. I think 5 volts would suffice but the more pressing matter is whether or not I can switch at a sufficiently high frequency to convert my PWM into a variable DC voltage while simultaneously activating the gate fully with each pulse interval. I think the safer move is with the auxiliary circuit, as you said.

No, you drive the device at >= 10 V if you want it to work. There is significant gate threshold variation between devices and over a device's lifetime (ion migration in the gate oxide); you cannot assume that because one device you've tested works at a lower voltage, that will be true for any device over its life. The manufacturer has done all the accelerated lifetime testing and device-spread characterization needed to claim the value of Rds(on) they do.

Quick recap on gate voltages:

Threshold voltage - the point where the device goes from fully off to starting to leak a few microamps or milliamps (depends on the size of the device).

Plateau voltage - the gate voltage at which the channel is forming, i.e. the device is actually switching, and where most of the gate charge builds up (it mirrors the channel charge).

On-voltage - the gate voltage you use for reliable operation.

Typically a non-logic-level FET will have:

threshold = 2 to 4 V
plateau = 5 to 8 V (depends on load current, note)
on-voltage = 10 to 12 V

Having the on-voltage at about twice the plateau leads to symmetrical switch-on/switch-off performance and is generally best.

Gahhhrrrlic:
Highest possible so I can use a smaller capacitor but it doesn't really matter. I'm only switching to make DC so higher is better so long as the transistor can cope.

Fair enough. You will face challenges switching the mosfet very rapidly. It will require a good low impedance driver circuit and very careful layout. At high frequencies the switching losses can get out of hand pretty quickly.

If you have a low impedance driver then you can switch very rapidly, but the values of di/dt that this mosfet can generate are huge. So unless you're very careful with the layout you can get feedback oscillations around switching points that increase the losses even more than using slower switching. When this happens you may even have to increase the gate drive resistance and slow it down, which also increases switching losses (but is the lesser of the two evils).

I notice you mention capacitors but make no mention of the magnetics; indeed, you make no reference to how you want to implement the filter. Can you outline what input and output DC voltage levels you require, whether or not they need a common ground, and what DC-DC converter topology you plan to use?

stuart0:
Fair enough. You will face challenges switching the mosfet very rapidly. It will require a good low impedance driver circuit and very careful layout. At high frequencies the switching losses can get out of hand pretty quickly.

Yes, this is switch-mode power electronics; it's all about such issues. Consider the Little Box Challenge: https://littleboxchallenge.com/

The winners use 35 to 250 kHz variable switching frequency and GaN FETs, and achieved 2 kVA output in a box size of 14 cubic inches or so: about 145 W/cubic inch (9 kW/litre!), not bad for a power converter.

stuart0:
Fair enough. You will face challenges switching the mosfet very rapidly. It will require a good low impedance driver circuit and very careful layout. At high frequencies the switching losses can get out of hand pretty quickly.

If you have a low impedance driver then you can switch very rapidly, but the values of di/dt that this mosfet can generate are huge. So unless you're very careful with the layout you can get feedback oscillations around switching points that increase the losses even more than using slower switching. When this happens you may even have to increase the gate drive resistance and slow it down, which also increases switching losses (but is the lesser of the two evils).

I notice you mention capacitors but make no mention of the magnetics; indeed, you make no reference to how you want to implement the filter. Can you outline what input and output DC voltage levels you require, whether or not they need a common ground, and what DC-DC converter topology you plan to use?

With the power levels in question (200 A real current in nominal operation) and an inductive load (a big-ass motor), I feel there's no benefit to putting rheostats or huge dump resistors in my system to make an optimal filter. So I just planned on using a sufficiently large capacitor, paralleled to ground in the main FET circuit, working with the existing line resistance. On paper it's hard to know the line resistance, so I'll have to measure it in situ and buy an appropriate cap. So: battery, then motor and cap to chassis ground in parallel, then FET, then chassis ground. Presumably the FET would be driven by the 12 V car supply and a small logic-level N-FET such as the IRF3708 (my personal favourite). A large capacitor may be necessary if lower switching frequencies are required (maybe 1 kHz), but since I have an inductive load, maybe this is a good thing, as it will balance out the phase angle a bit, which I think should actually reduce the power consumption? Also, I just happen to have half a dozen 100 V 4700 µF caps laying around already.

The ground will be the car sheet metal for all sources. The LiPo DC source will be about 100 V; the motor needs pretty much all of that at full throttle, and everything in between, based on the PWM I put out (ignore the FET voltage posted above, I picked the wrong one; mine can handle 100 V but is otherwise similar). The load being a motor, I have no real concerns about ripple ruining electronics or anything like that. I guess I'm not too concerned about rise and fall times either, because motors, and indeed most mechanical devices, change much more slowly than any electronics driving them, so the motor is the bottleneck, and I don't need to change its RPM on a dime anyway. 1/10 of a second is more than enough precision.

Gahhhrrrlic:
With the power levels in question (200 A real current in nominal operation) and an inductive load (a big-ass motor), I feel there's no benefit to putting rheostats or huge dump resistors in my system to make an optimal filter. So I just planned on using a sufficiently large capacitor, paralleled to ground in the main FET circuit, working with the existing line resistance. On paper it's hard to know the line resistance, so I'll have to measure it in situ and buy an appropriate cap. So: battery, then motor and cap to chassis ground in parallel, then FET, then chassis ground. Presumably the FET would be driven by the 12 V car supply and a small logic-level N-FET such as the IRF3708 (my personal favourite). A large capacitor may be necessary if lower switching frequencies are required (maybe 1 kHz), but since I have an inductive load, maybe this is a good thing, as it will balance out the phase angle a bit, which I think should actually reduce the power consumption? Also, I just happen to have half a dozen 100 V 4700 µF caps laying around already.

The ground will be the car sheet metal for all sources. The LiPo DC source will be about 100 V; the motor needs pretty much all of that at full throttle, and everything in between, based on the PWM I put out (ignore the FET voltage posted above, I picked the wrong one; mine can handle 100 V but is otherwise similar). The load being a motor, I have no real concerns about ripple ruining electronics or anything like that. I guess I'm not too concerned about rise and fall times either, because motors, and indeed most mechanical devices, change much more slowly than any electronics driving them, so the motor is the bottleneck, and I don't need to change its RPM on a dime anyway. 1/10 of a second is more than enough precision.

  1. No, never put a large capacitor (or any capacitor other than that contained in a proper snubber circuit) in parallel with your MOSFET.

  2. You don't need DC voltage, only DC current. Allow the motor inductance to filter the current.

  3. You say the MOSFET can handle 100 volts. I hope this isn't an absolute maximum rating. Please make sure you stay well away from absolute maximum ratings, as you will definitely get transient over-voltages.