# Can anyone explain RMS vs average? (particularly the math, and the intuition)

My current understanding of RMS (Root Mean Square) is that it is the equivalent AC voltage/current/power necessary to drive a resistive load the same amount as its DC counterpart. However, I am a bit confused about the math involved, and how it can apply to all waveforms.

I know the conversion factor for sine waves is 1/sqrt(2),

and that the conversion factor for square waves is 1:1 (no difference)

On my graphing calculator, if I take the definite integral of sin(x)^2 from 0 to pi (with respect to x), divide that by the period, pi, and then take the square root, I get the correct answer. However, what is the purpose of the squaring, then square-rooting? Is it purely a mathematical way of taking an absolute value? If so, then I should be able to simply integrate the absolute value of the sine, and divide the result by the period the integration was performed over. If the period it was performed over was 0 to pi, then there is no need to even take the absolute value! But this method returns 0.636619..., not 0.707106...
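The two numbers above can be reproduced numerically. Here is a small Python sketch (not the original poster's calculator) that samples the half cycle of a sine wave and computes both quantities; the sample count N is an arbitrary choice:

```python
import math

N = 100000                      # number of samples over one half cycle (0 to pi)
xs = [k * math.pi / N for k in range(N)]

# Root-mean-square: square each sample first, average, then take the square root.
rms = math.sqrt(sum(math.sin(x) ** 2 for x in xs) / N)

# Plain average of the absolute value (the "rectified average").
avg = sum(abs(math.sin(x)) for x in xs) / N

print(rms)   # ~0.7071, i.e. 1/sqrt(2)
print(avg)   # ~0.6366, i.e. 2/pi
```

Squaring before averaging weights the large excursions more heavily than taking the absolute value does, which is why the two results differ.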

Intuitively, it makes much more sense to me to *somehow* take the mean average of the continuous |sin(x)| function. I know this can be done with discrete numbers (a sequence, perhaps) by taking their sum and dividing by the number of 'elements' (series, anyone?). I would expect this average to be what determines the average power used by a load.

EDIT: Wikipedia states that "The true RMS value is actually proportional to the square-root of the average of the square of the curve, and not to the average of the absolute value of the curve." So it appears I have just proven this. But why???? What is so special about the squaring and rooting???

Please don't bombard me with heavy math, as I am not familiar with calculus, and have barely passed pre-calculus (the class was supposed to touch upon series and sequences, integrals, derivatives, limits, etc., but due to snow days and delays, it only covered trigonometry, the bare essentials). I do not yet fully understand single-variable integrals, let alone multi-variable calculus. I really just want a better intuition of RMS. More than likely, there is some confusion I have about integrals in general, and how to calculate them on paper.

## Discussions

A picture is worth 10^3 oops sorry worth a thousand words.

And the other is my graphing calculator ;-)


True, but the conversion only applies to the sine function. For square waves, triangle waves, and ramp/sawtooth waves, it is very much different. The reason I would really like to understand RMS is so I can have a good intuition of how it works, and an easy way to calculate it for ANY signal, including noise.

You want to understand what the RMS of any sine, square, or triangle waveform means, or even what a single non-repeating high-voltage inductive spike means.

Well, the Root-Mean-Square was developed to represent the real, actual heating value of a waveform over a specified interval of time. This is a resistor-in-a-calorimeter style of heat measurement.

In fact some early RMS meters used a thermocouple to arrive at the heating value to be indicated by the analog meter as an RMS output.

And some meters used to specify RMS for sine waves only.

So there you have it: RMS = heat produced by a wave shape, plus a great deal of schoolbook PhD math to calculate a simple value.

But what do you do if your inverter has a screwy waveform that isn't in the book?

How about a Riemann integral? I've done some, back before calculators.

http://en.wikipedia.org/wiki/Riemann_integral


I think I know enough that I can punch numbers and integrals into my calculator and come out with an answer. I assume the method described in my question works, but my intuition is still lacking. I can't wait until I take calculus (but I dread it as well :-/ )

Could you describe the Riemann integral, and how it differs from a definite integral? (I recently figured out the difference between an antiderivative, an indefinite integral, and a definite integral! Geez, so many types of them, why not one that just works!?)

Beyond a textual response, I would have to cover calculus first.

A Riemann integral is simply a graphic calculation of a 2D oddball area.

Oh, Riemann integrals are where one draws many small blocks underneath a function and geometrically solves for their area! Correct?

I see the picture of the slide rule you posted! EPIC! But can it do complex numbers?

It's my Versalog bamboo slide rule, and yes it can!

After all, they are just imaginary values based on the square root of a negative number, usually minus one. Just you try to find a number that, when multiplied by itself, results in a negative value :-)

Complex and imaginary numbers: awesome, weird, and intuitively frustrating at the same time!

Derivations are like multi-dimensional work; all you need to remember is that outside of school the math becomes REAL values of Xc, Xl, and R. That's all I really use.

So I have been busy, but have come back with one, hopefully last, question: if the entire sine wave is forward biased (never going negative), will the RMS be equal to the average value?

I call that adding a DC offset, and no, the RMS will not equal the average value.

Not sure without opening a book, but the DC offset and the sine-wave RMS combine as the square root of the sum of their squares, not a simple addition.
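This can be checked numerically. A Python sketch (amplitude and offset values are arbitrary choices, picked so the wave never goes negative) comparing the direct RMS of an offset sine against both the plain average and the root-sum-square combination:

```python
import math

A = 1.0          # sine amplitude (hypothetical value)
DC = 1.0         # DC offset; with DC >= A the wave never goes negative

N = 100000
samples = [DC + A * math.sin(2 * math.pi * k / N) for k in range(N)]

rms_direct = math.sqrt(sum(v * v for v in samples) / N)
avg = sum(samples) / N                       # the plain average is just the DC offset
rms_combined = math.sqrt(DC ** 2 + (A / math.sqrt(2)) ** 2)

print(avg)           # ~1.0   -> the average equals the offset, not the RMS
print(rms_direct)    # ~1.2247
print(rms_combined)  # ~1.2247 -> sqrt(DC^2 + (A/sqrt(2))^2) matches
```

So even a sine wave that never goes negative has an RMS above its average: the average is the DC offset alone, while the DC and AC contributions to the RMS add in quadrature.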

I have made the assumption that taking the definite integral of a curve or formula over a range, and then dividing that by the range, would result in the mean average of said curve/function. Is this correct? Perhaps this is where I am getting confused. I tried to do this with discrete points and basic arithmetic, by taking arbitrarily chosen points and averaging them, versus finding the area underneath the points and dividing that by the range.

Yes. The average value of a function, over some interval of its domain, is equal to the definite integral over that interval divided by the length of the interval.

As an example, consider the motion of a car on a straight road, for 40 seconds of time.

In the first 10 seconds ( 0<t<10) it undergoes a constant acceleration from a speed of 0 (rest) to 20 m/s. Over the next 10 seconds (10<t<20) its speed is a constant 20 m/s. Over the last 20 seconds (20<t<40) the car slows, under constant negative acceleration, coming to rest (speed=0) at t=40 s.

I drew some graphs for this. The plot for the speed of the car is just piecewise straight lines, so even if you don't know calculus, you can find the area under this curve using geometry, because the shapes are just triangles and rectangles.

Anyway, if I did the math right, the area under this curve, equal to the total distance traveled in this 40 second time interval, is 500 meters.
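The car example above can be checked numerically. A Python sketch (a stand-in for doing it on paper) that encodes the piecewise speed, finds the area by geometry, and then confirms it with a sum of thin slices:

```python
# Piecewise speed of the car (m/s) as a function of time t (s),
# matching the example above: accelerate 0..10 s, cruise 10..20 s, brake 20..40 s.
def speed(t):
    if t < 10:
        return 2.0 * t          # 0 -> 20 m/s under constant acceleration
    elif t < 20:
        return 20.0             # constant 20 m/s
    else:
        return 20.0 - (t - 20)  # 20 -> 0 m/s over the last 20 s

# Area under the curve by geometry: triangle + rectangle + triangle.
distance = 0.5 * 10 * 20 + 10 * 20 + 0.5 * 20 * 20
print(distance)                 # 500.0 m

# Riemann-style check: sum up speed * dt over many thin slices.
dt = 0.001
approx = sum(speed(k * dt) * dt for k in range(40000))
print(approx)                   # ~500, agreeing with the geometry
print(distance / 40)            # 12.5 m/s, the time-averaged speed
```

The slice sum is exactly the "many small blocks" idea of the Riemann integral discussed earlier in the thread.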

Dividing this total distance travelled, 500 m, by the time it took, 40 s, gives an average speed of 500 m / 40 s = 12.5 m/s, which is the time-averaged value of the speed of the car over that 40-second interval.

I think the math for calculating the RMS of some AC voltage waveform, also a function of time, is similar. I mean you find the definite integral of V^2(t) over one period T, then divide that by T, to get the mean (average) value of the squared waveform, which has units of volts-squared. Then the last step is just taking the square root.

By the way, learning calculus overnight is probably not humanly possible, but the definition of RMS for continuous-time signals necessarily has that definite integral in it.

I don't know if you know about discrete-time signals, but they are kind of what they sound like. A discrete-time signal is only defined at discrete, and usually periodic, points in time. An easy example might be something like an automated weather station that samples the air temperature every 6 minutes, so that over one day it collects (24 hours)*(10 samples/hour) = 240 samples. Then if you want to know the average temperature for that day, it is just the sum of all those samples divided by 240, the number of samples in that time interval.

What I'm saying is, in discrete time, integrals are replaced by sums, and that makes the math easier if you do not yet know how to do integrals. If you want to calculate RMS values for sine waves, or triangle waves, or whatever, by just considering the waveform at several discrete points in time, that will probably give a good approximation to the answer you'd get using an integral over a true continuous-time signal, if you use enough data points. You can probably do this with Octave, or with your graphing calculator. I'd guess a sum over maybe 100 samples per period, taken at regular intervals in time, would give a good approximation of an integral for a signal like V(t) = sin(2*pi*t/T), or V^2(t) = sin(2*pi*t/T)*sin(2*pi*t/T) = sin^2(2*pi*t/T). Note that when approximating an integral as a sum, the dt gets replaced by a delta-t, and that's just the time between your samples.

Also, the answer for an integral should come out in units of the range multiplied by the units of the domain. In the example above where I integrated speed (in m/s) as a function of time (in s), the answer came out in units of just m, since (m/s)*(s) = (m).
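The sampled-sum recipe above can be written out directly. A Python sketch (Octave or a graphing calculator would work just as well) using the suggested 100 samples per period of a unit-amplitude sine; the period T is an arbitrary choice:

```python
import math

T = 1.0                  # period in seconds (arbitrary)
N = 100                  # samples per period, as suggested above
dt = T / N               # delta-t, the time between samples

samples = [math.sin(2 * math.pi * k * dt / T) for k in range(N)]

# Approximate the integral of V^2 over one period as a sum of V^2 * delta-t,
# divide by T to get the mean square, then take the square root.
mean_square = sum(v * v for v in samples) * dt / T
rms = math.sqrt(mean_square)

print(rms)   # ~0.7071, i.e. 1/sqrt(2) for a unit-amplitude sine
```

Even 100 evenly spaced samples recover the textbook 1/sqrt(2) factor essentially exactly, because the sampling error averages out over a full period.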

"By the way, learning calculus overnight is probably not humanly possible"? NO, I WILL learn it all!!! (well, maybe just integrals, summations, and averages) LOL :)

But anyway, before I read your comment, I decided to try summing up sin(1π/10)+sin(2π/10)+sin(3π/10)+sin(4π/10)+sin(5π/10)+sin(6π/10)+sin(7π/10)+sin(8π/10)+sin(9π/10)+sin(10π/10) and then dividing it by π in an attempt to get an average the "simple" way. I ended up with 2.0091..., suspiciously close to the area underneath the curve (∫ from 0 to π of sin x dx = 2), so the mean average is probably 2!?!?

Now using discrete math (∑'s and all), I have summed up the first half cycle of a sine wave, split into 1000 sections, which results in 636.6..., and multiplied that by the interval (a 1000th of the half cycle, π/1000) to get the area under the curve, and I see 1.999998... :D Progress!
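That 1000-section sum can be reproduced in a few lines of Python (as a stand-in for the calculator work above), which also shows the distinction between the raw sum, the area, and the mean:

```python
import math

N = 1000
dx = math.pi / N         # width of each section over the half cycle 0..pi

total = sum(math.sin(k * dx) for k in range(N))
print(total)             # ~636.6, the raw sum of the 1000 samples
print(total * dx)        # ~2.0, the area under sin(x) from 0 to pi
print(total / N)         # ~0.6366 = 2/pi, the true mean of the half cycle
```

Note that multiplying the sum by the section width gives the area (the integral), while dividing the sum by the number of samples gives the average. Those are different numbers, which is the source of the confusion above.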

To me, this proves the integral of the sine *IS* the average value, and vice versa. I am sure this can't be right; tomorrow I will recheck my work. Right now it is past 1, so time for a nap I suppose, lol!

If you take the average (mean) of a sine wave with no offset, you get zero. Duh :-) The RMS, as Kreegs said, is a measure of how much the curve (or whatever distribution you care about) _varies_ from its mean. If you have a normal (Gaussian or "bell shaped") distribution, then the RMS tells you the width of the distribution, while the mean tells you where the peak is. If you want something more concrete then you're just going to have to get into the guts of the calculus.

Why is any of this interesting for AC power? Because (a) the mean voltage of an AC distribution is *ZERO* (see above); and (b) we know d**ned well, from sticking screwdrivers into wall sockets, that the mean voltage doesn't describe why we get knocked onto our butts :-)

RMS is better than < |sin| > because it tells you quantitatively how far, on average, the voltage _varies_ from zero. The mean absolute value underestimates the variation, because it doesn't properly account for the lower slope at the top of the curve. Why? You need calculus to answer that.

I already know that the positive part of the sine wave will cancel out with the lower half, and the mean average would be 0 if there were no offset; that is why I said the average of the absolute value of the sine wave. It is just weird that this does not return the same result!

The RMS about the mean is defined as sqrt(<x^2> - <x>^2); for a zero-mean signal like an unoffset sine wave, this reduces to the plain RMS, sqrt(<x^2>). Either way, it is NOT the same as <|x|>. You can work it out numerically for some simple examples.

Interesting, but what is x? Also, what are < and >? They look like greater-than/less-than symbols, but they're not used like that here. Also, this does not look quite the same as what Wikipedia suggests.

'x' is whatever quantity it is that you want to compute the RMS of. The notation "<x>" is used to represent the mean value (technically, the expectation value, but in the case of a randomly distributed variable the two are equal in the limit of infinite statistics). You may also have seen the notation for an average as a bar drawn over the symbol; that's hard to do in pseudo-HTML comments.

It is straightforward algebra to prove that the variance, defined as the average of the squared difference between each value and the mean, is identical to the mean of the squared values minus the square of the mean.

The expression I wrote is more convenient for computation, because you can calculate it by accumulating the sum of values, and the sum of squared values, as you take data, and then do the averaging at the end. You don't have to collect, store, and make a second pass through all of the values.
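The single-pass accumulation described above can be sketched in Python (the data values here are hypothetical, chosen just to make the arithmetic easy to check by hand):

```python
import math

data = [3.0, 5.0, 7.0, 9.0]   # hypothetical sample values

# Single pass: accumulate the sum of values and the sum of squared values.
n = 0
s = 0.0       # running sum of values
s2 = 0.0      # running sum of squared values
for x in data:
    n += 1
    s += x
    s2 += x * x

mean = s / n
variance = s2 / n - mean ** 2          # <x^2> - <x>^2
std_dev = math.sqrt(variance)

# Two-pass check: average of the squared deviations from the mean.
variance_check = sum((x - mean) ** 2 for x in data) / n
print(variance, variance_check)        # both 5.0 for this data
```

The one-pass form needs only two running totals, so it works while data is still streaming in, with no second pass over stored values.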

I guess I am trying to wrap my head around the connection between probability (Gaussian distributions, variance, randomly distributed variables, etc.) and RMS. Interesting; RMS is definitely not a mean average, as that would be 2/π. (∫ from 0 to π of sin x dx = 2; 2 * 1/π = 2/π.) I see where you state this underestimates, and I proved this by literally adding up sin(1π/10)+sin(2π/10)+sin(3π/10)+sin(4π/10)+sin(5π/10)+sin(6π/10)+sin(7π/10)+sin(8π/10)+sin(9π/10)+sin(10π/10). This could have been done with ∑, but I am not confident with that format. Anyway, my answer was 6.3138..., and when I divided it by π, I got suspiciously close to 2! (2.0097 to be exact!)

The RMS is actually a standard-deviation representation. So what you are doing is figuring the standard deviation of each point from the mean: you find the mean, then for each element in the series you subtract the mean from that number, then square it to get rid of the negatives. From there, you take the square root of the average of those squared numbers.

http://en.wikipedia.org/wiki/Standard_deviation#Ba...

It's a little more complicated than that, but to keep it simple, that's the best way to explain it without calculus.

I recall computing deviation in Chemistry 112 lab: calculating the mean, subtracting individual experimental results from it, and recording the absolute value. However, why can't I just use absolute values? Squaring and then square-rooting not only seems like a workaround for something, but also returns a different result, due to the order of operations.

Could you please elaborate on how this is actually a form of deviation? I do have a little bit of calculus understanding; if not, I'll Google the concepts!

RMS is just another way to measure the magnitude, the size, bigness, of an alternating signal.

I think the reason why it has a square, then square root, in its formulation is because that sort of dovetails with the squares found in expressions for electrical power.

In the context of electrical engineering, the signal being measured is usually a voltage V(t) or a current I(t).

The square of those signals, V(t)*V(t) = V^2(t), or I(t)*I(t) = I^2(t), is almost the same as the formula for instantaneous power through a resistive load:

P(t) = V^2(t)/R = I^2(t)*R

The only thing missing is the R.

Integrating P(t) = V^2(t)/R over one period (like from t=0 to t=T) gives the amount of energy dissipated by R each period.

Energy delivered in one period = integral(P(t) dt) = integral(V^2(t)/R dt) = (1/R)*integral(V^2(t) dt)

This quantity of energy, divided by T, is the time-averaged power.

These are the same steps involved in calculating RMS voltage, except there's no explicit resistor R.

So roughly speaking, RMS, or rather the square of RMS (just MS?), is a measure of the average "power" in a signal, or the "energy" it can deliver per cycle.

The last step, taking the square root, brings the units back to the same units as the signal; e.g. if the signal is measured in volts, then the RMS value is also in volts.
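The whole chain above (square, integrate over a period, divide by T, square-root) can be sketched numerically in Python. The peak voltage and load resistance here are hypothetical values, chosen only for illustration:

```python
import math

Vpeak = 170.0            # hypothetical peak voltage (roughly US mains)
R = 100.0                # hypothetical resistive load, in ohms
T = 1.0 / 60.0           # one period of a 60 Hz sine, in seconds
N = 10000                # samples per period
dt = T / N

# Energy delivered to R in one period: sum of V^2(t)/R * dt slices.
energy = sum((Vpeak * math.sin(2 * math.pi * k * dt / T)) ** 2 / R * dt
             for k in range(N))

avg_power = energy / T                 # time-averaged power over the period
vrms = Vpeak / math.sqrt(2)

print(avg_power)                       # ~144.5 W
print(vrms ** 2 / R)                   # ~144.5 W as well: Vrms^2 / R
```

The two printed numbers agree, which is exactly the point: an RMS voltage is the single number you can plug into the DC power formula V^2/R and get the true average power of the AC waveform.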