On 4/17/2013 4:11 PM, XLT_Frank wrote:
> dpb <email@example.com> wrote in message <firstname.lastname@example.org>...
>> On 4/17/2013 10:52 AM, XLT_Frank wrote:
>> > dpb <email@example.com> wrote in message <firstname.lastname@example.org>...
>> >>
>> >> I'd instead use the integer sampling rate as the time instead of float
>> >> since you want an exact comparison and that's iffy w/ floating point,
>> >> and then use ISMEMBER or INTERSECT.
>> ...
>> > Thank you for the update. I will explore those. I am trying to figure
>> > out what to do about time. I was just informed that my date and time is
>> > not being provided as some form of serial number in integer format.
>> > Instead it is going to be a column with the date, likely in day of year,
>> > while the time column is going to be hh:mm:ss.ssss. It should match the
>> > IRIG MIL-1553 Chapter 10 format from the range commander's tools. I
>> > guess I could request that they provide the RTC data, which is
>> > milliseconds since turn-on, and then just use the traditional date/time
>> > for labeling.
>>
>> That'll work just fine w/ the DOY; ML DATENUM is smart enough to deal
>> with it. Example,
>>
>> >> datestr(datenum(2013,1,33))
>> ans =
>> 02-Feb-2013
>>
>> Any field passed to datenum that exceeds the proper range for that
>> date/time field will propagate to the next higher field, correctly
>> accounting for day/month lengths, leap years, etc., etc. IOW, you don't
>> need anything special to deal with it in Matlab.
>>
>> Is the sampling process buffered, so the timing is precise and can just
>> be inferred from the ordinal count and the sampling rate, or is it a
>> polling process where there is jitter and the sampled timestamp is
>> significant? Generally sampled data is more like the former at such
>> sampling rates, and the RTC is just bookkeeping to keep track of what a
>> particular record means in the larger context of the experiment,
>> datalogger, whatever it is...
>>
>> --
>
> I am not sure how to answer that. I am still coming up to speed on how
> the Chapter 10 data is stored, the difference between the absolute and
> relative time, and how the 1553 message is sampled.
>
> I am now expecting my data to look like this (assuming a full date). I
> am going to be using mfcsvread.
>
> data1.date(1) = '2013-03-01'
> data1.time(1) = '18:42:55.496'
> data1.sensorA(1) = -27
>
> Of course data2 would have a different timestamp and a different
> sensor. I would then use interp1 with the datenum serial values of
> data1.time and data2.time to ultimately create a data1.sensorB that is
> just an interpolated value. So would the serial value of datenum support
> the 1 ms resolution of my data2.time values?
OK, I've not dealt w/ 1553-compliant datastreams, so the details of how the timestamps are correlated w/ the actual data sampling are a little fuzzy to me (and didn't get much better after a brief look online at the document). It would indeed take an education to get fully up to speed on that.
What I was envisioning from the prior posting was a data acquisition device sampling an event every 50 usec, with the data being time-stamped as it was transmitted. In that case I could imagine the sampled timestamp isn't really pertinent to the data itself other than as a general TOD indication. After reading the spec I'm not so sure about that; the data acq device may also be on the same clock as the RTC, and if it's triggered at that rate I'd guess the two would be the same. Then again, I saw a reference to an external clock, so in the end it depends on the hardware in place and how the data are being collected, and you'll have to learn that from the folks generating the data.
If the DAQ is triggered independently, then I figured the actual sample time of each record is more directly represented by simply the ordinal number of the record from the trigger time, times the sample dt.
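For that case, here's a minimal sketch of what I mean (the 50 usec interval, record count, and trigger time are made-up values; substitute whatever the hardware actually reports):

```matlab
% Assumed values -- replace with the actual trigger time, rate, and count.
dt = 50e-6;                          % sample interval, seconds
t0 = datenum(2013,3,1,18,42,55.496); % trigger time as a datenum
N  = 1000;                           % number of records (example)
n  = (0:N-1).';                      % ordinal record numbers
t  = t0 + n*dt/86400;                % serial days (86400 sec/day)
```

These computed times are fine for interpolation or plotting, but don't rely on them for exact equality tests against timestamps built a different way.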
Regarding your question about the ML datenum: I think you'll be OK at the msec resolution level _as_long_as_ you generate all values consistently with the datenum() function and friends, i.e., by passing the ordinal time fields as y,m,d,h,mn,s (with fractional seconds), and not by doing floating-point computations outside those functions and then comparing the two. I can guarantee from the results of some relatively recent threads here that that _will_ fail. As long as everything stays consistently within datenum(), the rounding will be consistent and you can rely on comparisons and on transformations to/from the string display format and the internal storage format.
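To make that concrete with your data1/data2 layout from above (the format string is my assumption based on your '2013-03-01' / '18:42:55.496' sample record), something like:

```matlab
% Convert both channels' timestamps with the identical datenum() call so
% any rounding is the same on both sides.
fmt = 'yyyy-mm-dd HH:MM:SS.FFF';
t1  = datenum(strcat(data1.date, {' '}, data1.time), fmt);
t2  = datenum(strcat(data2.date, {' '}, data2.time), fmt);

% 1 msec is ~1.2e-8 serial days; double precision near year-2013 datenum
% values resolves ~1e-10 days (~10 usec), so msec data is representable.
data1.sensorB = interp1(t2, data2.sensorB, t1);
```

interp1 wants t2 monotonic, so sort first if the records can arrive out of order.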