
Math Forum » Discussions » Software » comp.soft-sys.matlab

Topic: Plotting data with microsecond accuracy
Replies: 3   Last Post: Apr 17, 2013 10:05 AM



Re: Plotting data with microsecond accuracy
Posted: Apr 17, 2013 10:05 AM

On 4/17/2013 2:47 AM, Kevin wrote:
> dpb <none@non.net> wrote in message <kkk2p7$a73$1@speranza.aioe.org>...
>> On 4/16/2013 12:08 PM, Kevin wrote:
>> > I am trying to plot voltage values that have been sampled at 50 us
>> > intervals. Each of these samples have been timestamped so that they can
>> > be compared with other measurements. I've converted the timestamps to
> datenum values but when I use the PLOT function, the x-axis is scaled
>> > in such a way that all of my data is compressed to the centre of the plot.
> I have tried manipulating the axis using xlim and datetick but nothing
>> > changes. What I really need to be able to do is zoom in on specific
>> > areas of the plot while safeguarding the timestamps and scaling the
>> > axis accordingly. Any advice would be much appreciated.
>> >

>> If you're coding in the absolute clock time to the us, the problem is
>> that the resolution of DATENUM is being stretched to its limits. I
>> just tried here and the best I can get w/ xlim on the x-axis w/ a
>> current-year datenum magnitude is roughly 1E-4 sec, whereas 1 ms on a
>> datenum is on the order of 1E-8 (of a day). Thus, while I could get a
>> wider vertical line than the default single pixel, it is still
>> essentially a vertical line.
>> If you also need to know the real time associated w/ that record, I
>> think you'll have to keep that separate from the actual time record--I'd
>> probably just count the time series itself in us from wherever the
>> sample starts in the record and then, if you need to date it, write
>> that as a text string.
>> --

> Keeping the times as separate strings was one option I was looking into.
> However, everything I read online states that the resolution of DATENUM
> is 10 us, 5 times smaller than what I need. In addition, if I subtract
> two subsequent timestamps, I can retrieve the original 50 us period so
> MATLAB definitely has the precision stored somewhere. It just seems like
> the plotting function can't handle the precision. Is it possible that
> the plotting functions remove the necessary precision? I would think not
> but I'm open to any ideas at this point.

OK, I misstated what I was intending slightly -- the graphics limitation
is indeed apparently in the internals of the handle graphics objects:
as noted, when I attempted to set the smaller resolution on an axis it
rounded to a larger range than the one given explicitly, and I was as
unable as you to change that behavior. You can submit a bug report to
official TMW support and see if you get any joy, but it makes me wonder
whether internally some of HG may be implemented w/ single precision
instead of double. That thought is what I didn't state explicitly but
got transmuted thru the keyboard into a limitation on datenums
unintentionally. Although I did note that when I created a series of
50-usec-spaced datenum time values and then differenced them, the
spacing wasn't perfectly uniform but instead showed granularity greater
than that of simply 50E-6/86400.

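The code from that experiment was lost in archiving; a minimal
reconstruction of what it describes might look like this (the date and
sample count here are made up for illustration):

```matlab
% Reconstruction sketch -- the original code did not survive archiving.
% Build a series of absolute datenum values spaced 50 usec apart:
t0 = datenum(2013, 4, 17);       % a current-era datenum magnitude
dt = 50e-6/86400;                % 50 usec as a fraction of a day
t  = t0 + (0:999)*dt;            % 1000 timestamped samples
% The successive differences should all equal dt, but roundoff at
% datenum magnitudes makes the spacing visibly granular:
d = diff(t);
disp([min(d) max(d) dt])         % min and max straddle the exact spacing
```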
Since a datenum time is stored as the fraction of a day, 50 usec is
50E-6/86400 ~ 5.8E-10. A double has roughly 15 decimals of precision, so
there are only about 5 decimals of precision left, which is less than
that of a single by a couple of positions. For display purposes that
won't hurt, but it might really show up if one were trying to use the
datenum representation as an axis value for actual computations like
derivatives or somesuch for control, frequency analyses, etc.

All in all, it seems to me to be stretching the point to try to keep
clock time in the actual timestamp for a sampled waveform--I have to
admit I've not seen it done previously in some 40 years... :) It's
always been done by considering the transient or event to begin at time
zero, keeping time from there in units of the sample interval or
multiples thereof, and then just referencing that to the clock time.
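A minimal sketch of that convention (the voltage vector v and the record
start time t0 below are placeholders, not the OP's actual data):

```matlab
% Hypothetical data: v stands in for the sampled voltage record,
% t0 for the absolute clock time at which the record starts.
t0   = datenum(2013, 4, 16, 12, 8, 0);   % illustrative start timestamp
v    = randn(1, 2000);                   % stand-in for the real samples
t_us = (0:numel(v)-1) * 50;              % elapsed time in usec from t0
plot(t_us, v)
xlabel('elapsed time (\musec)')
% Keep the absolute clock time as annotation text, not as axis data:
title(['record start: ' datestr(t0, 'dd-mmm-yyyy HH:MM:SS.FFF')])
```

The x-axis then carries only small integers, so zooming with xlim works
at full double precision, and the clock time survives as text.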




© The Math Forum 1994-2015. All Rights Reserved.