Digital Scope Specs: wfms/s vs. GSa/s vs. Mpts/s vs. bandwidth?

Question says it all. I understand the concept of analog bandwidth, being that the signal amplitude will be attenuated to about 70% of its true value (the -3 dB point) at the rated frequency, or something along those lines. I also think I understand the concept of sampling rate, being that the ADC inside the digital scope samples the input voltage level periodically (possibly billions of times per second), and this gets stored in memory and displayed.

However, what does the "Mpts/s" figure mean? Is this telling me how many points of the signal can be stored in memory to be plotted on the screen? How much of this is 'good' and 'good enough'? Also, what about waveform capture rate (wfms/s)? I really have no idea what that might be. How is a 'waveform' defined? A single period of a repetitive signal, like a ramp or triangle wave? Perhaps also an arbitrary wave? What would this tell me? Again, how much of this is 'good' and 'good enough'?

iceng 2 years ago

Mpts is the long memory

Quote

"All digitizing scopes store samples into memory. As memory depth increases, the scope can store more samples into memory.

The
higher the number of samples that are stored in memory, the higher the
sample rate. Thus, deeper memory allows you to sustain the maximum
sample rate specified on the scope across a wider range of timebase
settings.

Keep in mind, too, that due to a higher sustained sample
rate, deeper memory will provide you more accurate and reliable
measurements"

http://cp.literature.agilent.com/litweb/pdf/5989-4...

Basically, if you have a great giga-sample/s figure but low Mpts, you cannot troll for a narrow spike in a long smooth waveform, because there isn't enough long memory to hold that amount of data.

Otherwise you need to reduce the sample rate to fit the memory and may miss that narrow spike.

The PDF explains it in more detail!
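To put rough numbers on that trade-off, here is a minimal sketch in Python (my own illustration with made-up but plausible figures, not something taken from the Agilent note):

    # Illustration: memory depth caps the sample rate once the capture window gets long.
    MEMORY_DEPTH    = 12e6   # points (example "12 Mpts" scope)
    MAX_SAMPLE_RATE = 1e9    # samples/s (example "1 GSa/s" ADC)

    def effective_sample_rate(capture_window_s):
        """Best rate the scope can sustain while still fitting the window in memory."""
        return min(MAX_SAMPLE_RATE, MEMORY_DEPTH / capture_window_s)

    for window in (1e-3, 12e-3, 100e-3, 1.0):          # 1 ms ... 1 s on screen
        rate = effective_sample_rate(window)
        print(f"{window*1e3:7.1f} ms window -> {rate/1e6:8.1f} MSa/s "
              f"(narrowest spike you could hope to see ~{2/rate*1e9:.0f} ns)")

With the example 12 Mpts, anything past a roughly 12 ms window forces the sample rate down, which is exactly when a narrow spike can slip between samples.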

-max- (author) iceng 2 years ago

So what you are saying is that there is no point in having large sample rates when the memory (Mpts) cannot store all that data for the given time base (unless you take the time base way down and "zoom in" horizontally), similar to the pointlessness of having a huge sample rate without any reasonable analog bandwidth. Do I understand correctly?

Are there caveats when it comes to capturing?

iceng -max- 2 years ago

That is a correct understanding from the Agilent literature.

You can still go at top GSa/s if you can trigger on the waveform or spike in question, but it takes long memory to capture all the dead time when trying to find something like a finger pulse that only happens once a second.

I suspect any capture caveats would be in the unit's software, which, depending on the brand, could provide upgrades.

-max- (author) iceng 2 years ago

Well, for lab equipment, Agilent = Keysight now. I personally hate the new name and branding, but oh well.

I wonder why they cannot design an ADC that, rather than being dormant for the majority of the time and then making a very quick measurement (leaving lots of dead time for stuff to be missed), instead takes the average over the whole sample period (perhaps using a capacitor as an integrator to avoid costly calculations), and then sends that averaged measurement off to processing and memory. Would that help eliminate the issue? The only problems I can see are hardware limitations, it being too complex, or it just not being possible due to how ADCs work, or something like that, I guess.
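For what it's worth, many scopes do something close to this in their "high-resolution" or averaging acquisition modes. A toy sketch of the idea (my own illustration, not any scope's actual firmware): if the ADC runs N times faster than the stored record, averaging each block of N keeps some evidence of a spike that plain decimation throws away.

    # Toy comparison: keep 1-of-N samples vs. average each block of N samples.
    N = 100                                   # fast ADC samples per stored point
    raw = [0.0] * 10_000
    raw[4_321] = 1.0                          # one narrow spike buried in the record

    keep_every_nth = raw[::N]                 # plain decimation: the spike is missed
    block_average  = [sum(raw[i:i + N]) / N   # averaged decimation: spike survives,
                      for i in range(0, len(raw), N)]   # smeared and attenuated

    print("decimated max:", max(keep_every_nth))   # 0.0  -> spike lost entirely
    print("averaged  max:", max(block_average))    # 0.01 -> spike still visible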

Anyway, what about segmented memory? From what I understand, it does not store much data from the dead time between digital (and other) signals, so it is essentially a form of compressing the data in the most lossless way possible.
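Roughly, yes: segmented capture only spends memory around each trigger event and skips the dead time in between (real scopes also timestamp each segment, so the gaps aren't lost). A back-of-envelope sketch with made-up numbers:

    # Back-of-envelope: one long continuous capture vs. segmented capture.
    SAMPLE_RATE = 1e9        # 1 GSa/s (example figure)
    TOTAL_TIME  = 1.0        # want to watch 1 s of bus traffic
    BURST_LEN   = 2e-6       # each interesting burst is about 2 us wide
    BURSTS      = 1000       # roughly 1000 bursts in that second

    continuous_pts = SAMPLE_RATE * TOTAL_TIME          # 1e9 points: won't fit
    segmented_pts  = SAMPLE_RATE * BURST_LEN * BURSTS  # 2e6 points: fits easily

    print(f"continuous: {continuous_pts/1e6:,.0f} Mpts")
    print(f"segmented : {segmented_pts/1e6:,.0f} Mpts "
          f"({continuous_pts/segmented_pts:,.0f}x less memory)")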

steveastrouk 2 years ago

You think that's bad? Imagine how we all felt when they destroyed the name of Hewlett and Packard and split the instrumentation off into Agilent...

-max- (author) steveastrouk 2 years ago

Equally... although I have not personally had much of a good experience with HP, at least not with their professional-grade test equipment. For me, HP is the company that made the 'ProBooks' at school, and not a single one of them is operational; IT refuse to fix them, or have tried with failed attempts (they run as if the hard drive is failing, or really, *really* slowly). The school got them as hand-me-downs from a better VCCS college, and now they are planning to give them away to another poor school to get rid of them, in favor of Dell notebooks. I am certain that their line of pro equipment back in the day was far better though, and I know of some really old HP computers and printers that have held up well even to this day.

HP computers <> HP instrumentation.

iceng -max- 2 years ago

Yea, but you have to train your scope to look for the snake in the grass and then speed up sampling to show its fangs. If you can do that, you might as well just set the trigger to grab at top giga-sample rate... That can be done by most scopes.

Where long memory is really useful is when you have that glitch but need to see time before it bites.

Maybe just keep rolling, writing over old memory, until you punch a Stop key.
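That roll-until-Stop behaviour, and seeing the time before the glitch bites, is basically a ring buffer that keeps overwriting itself until the trigger or the Stop key freezes it. A toy sketch of the idea, not any scope's actual acquisition engine:

    # Toy pre-trigger capture: a ring buffer keeps recent history until the trigger fires.
    from collections import deque

    PRETRIGGER_PTS = 8                        # how much history to keep before the event
    ring = deque(maxlen=PRETRIGGER_PTS)       # old samples silently fall off the back

    def acquire(stream, trigger_level=0.5, posttrigger_pts=4):
        """Return the samples just before and just after the first level crossing."""
        it = iter(stream)
        for sample in it:
            ring.append(sample)
            if sample > trigger_level:        # trigger: history is already in the ring
                post = [next(it) for _ in range(posttrigger_pts)]
                return list(ring), post
        return list(ring), []                 # never triggered: just the rolling history

    signal = [0.1, 0.0, 0.2, 0.1, 0.0, 0.1, 0.9, 0.2, 0.1, 0.0, 0.1]
    pre, post = acquire(signal)
    print("before the glitch:", pre)          # includes the lead-up to the 0.9 spike
    print("after the glitch :", post)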

Remember there is a high-speed micro executing code between each nano-sample, and you can't overburden it with tasks like analyzing 3 to 5 successive points to decide which memory to use, etc. That's when we use multiple scopes, and then who is going to review giga-reams of data?

There's usually a smarter way to trap a glitch.

-max- (author) iceng 2 years ago

Well, I wonder if deep memory is useful for data-logging functionality? I believe the scope I am looking into (DS1054Z) allows exporting of data through USB and LXI Ethernet.
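It can be, at least for pulling captured records off the scope. As a rough sketch of reading a trace over LXI with pyvisa (the IP address is made up, and the :WAVeform commands are assumptions based on the DS1000Z programming guide, so check them against the actual manual):

    # Rough sketch: read a waveform from a DS1054Z over LXI/Ethernet using pyvisa.
    import pyvisa

    rm = pyvisa.ResourceManager()
    scope = rm.open_resource("TCPIP0::192.168.1.100::INSTR")   # hypothetical address

    print(scope.query("*IDN?"))                # sanity check: should identify the scope

    scope.write(":WAV:SOUR CHAN1")             # read channel 1
    scope.write(":WAV:MODE NORM")              # screen data (RAW mode reads deep memory)
    scope.write(":WAV:FORM BYTE")              # one byte per sample point

    # query_binary_values strips the IEEE 488.2 block header and returns the samples
    samples = scope.query_binary_values(":WAV:DATA?", datatype="B", container=list)
    print(f"got {len(samples)} points; first few: {samples[:5]}")

    scope.close()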