ANT Forum

FIT to CSV, incorrect utc_timestamp

Total Posts: 14
Joined 2017-04-10

I'm currently parsing the FIT data from a Garmin Virb Ultra 30 via a CSV file generated by FitCSVTool.jar from the SDK.

There are some oddities in the UTC timestamps. Since the camera can log points at 10 Hz, millisecond stamps are vital, but every tenth utc_timestamp is wrong in the CSV file.

For example, three consecutive points might have the following (relative) timestamps in the CSV:

timestamp 2838 s ... utc_timestamp 854514241 s ... timestamp_ms 910 ms
timestamp 2839 s ... utc_timestamp 854514241 s ... timestamp_ms 10 ms
timestamp 2839 s ... utc_timestamp 854514242 s ... timestamp_ms 110 ms


It seems the utc_timestamp consistently rolls over to the next second one measurement too late. (I really need the UTC number, although I can probably reconstruct it.)

Could this be an issue with the FIT-to-CSV tool, or is it also present in the original FIT file? Is anyone else seeing this?
Total Posts: 68

FitCSVTool just converts from the binary storage format of the .FIT file, so it is very likely that the issue here is with the data stored in the file. If you attach a sample file, I can verify it for you.
Total Posts: 14
Joined 2017-04-10

The .fit file in the OP was too large, so I've attached another. In this case the utc_timestamp (seconds) seems to roll over to the next second even later. Maybe I'm not parsing the data correctly? I assumed that all timestamps were on the same "relative" timeline (milliseconds in sync, so to speak). If not, perhaps there is some issue with the Ultra 30 - I'll check for firmware updates.

Had there been a uniform offset of a few seconds I could have made a temporary fix, but in this case I don't know.

Apologies for the Dropbox link - I'll remove it afterwards. I tried to attach a file but received an error message - I'm currently on a mobile network.

[EDIT: DROPBOX LINK REMOVED]

EDIT:
Here's the file mentioned in the OP:

[EDIT: DROPBOX LINK REMOVED]

I suspect I'm parsing the CSV incorrectly, but if you can confirm that the .fit file is OK, that would be great. I'm currently trying to reconstruct the timestamps using the values on the timestamp_correlation row.
Total Posts: 68

Okay! So, the timestamp_ms field is meant to be combined with the timestamp field, not the utc_timestamp field; the timestamp_correlation message can then be used to determine the UTC time with milliseconds.
Total Posts: 14
Joined 2017-04-10

Ok, thanks. I was wondering about that. I'll just ignore the utc_timestamp, other than for rough comparisons, then.

So if I want to create a "time zero" (date + time, counting from UTC 1989-12-31 00:00:00.000) corresponding to the start of the telemetry stream, is the following correct, given the timestamp_correlation row in the CSV:

t_zero = timestamp + timestamp_ms - system_timestamp - system_timestamp_ms

e.g.:

t_zero = 854511660s + 0ms - 258s - 128ms

for the linked file 2017-01-28-05-16-40.fit?

Then, to get the "absolute" date and time for an event later on, I add the timestamp and timestamp_ms from the corresponding row for the logged event. E.g. for a gps_metadata event with the following timestamps:

timestamp = 261s
timestamp_ms = 250ms

I'll do the following:

t_zero + 261s + 250ms

Does this look correct to you? It seems ok so far. I couldn't find system_timestamp in the documentation.

Eventually, I'd like to parse the binary directly, but that will have to wait since I'm currently working in Python. Until then the CSV is fine and the logic still applies.
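For reference, here's a minimal Python sketch of that logic. The FIT epoch of 1989-12-31 00:00:00 UTC and the correlation values are taken from the posts above; everything else (names, structure) is just my own illustration:

    from datetime import datetime, timedelta, timezone

    # FIT timestamps count seconds from 1989-12-31 00:00:00 UTC.
    FIT_EPOCH = datetime(1989, 12, 31, tzinfo=timezone.utc)

    # Values from the timestamp_correlation row of 2017-01-28-05-16-40.fit.
    timestamp, timestamp_ms = 854511660, 0            # absolute time at GPS sync
    system_timestamp, system_timestamp_ms = 258, 128  # relative time at GPS sync

    # "Time zero" of the telemetry stream, as an offset from the FIT epoch.
    t_zero = (timedelta(seconds=timestamp, milliseconds=timestamp_ms)
              - timedelta(seconds=system_timestamp, milliseconds=system_timestamp_ms))

    # Absolute date/time of a gps_metadata event with a relative timestamp.
    event_utc = FIT_EPOCH + t_zero + timedelta(seconds=261, milliseconds=250)
    print(event_utc)  # 2017-01-28 04:21:03.122000+00:00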
Total Posts: 68

Your logic looks correct to me.      
Total Posts: 9
Joined 2018-02-28

I think any time given in milliseconds must be divided by 1000; otherwise you end up summing two quantities with different units:

210 s + 234 ms = 444
210 s + 0.234 s = 210.234 s

ciao
     
Total Posts: 14
Joined 2017-04-10

Filippo - 05 April 2018 08:04 PM
I think any time given in milliseconds must be divided by 1000; otherwise you end up summing two quantities with different units:

210 s + 234 ms = 444
210 s + 0.234 s = 210.234 s

ciao


No, look at my previous reply. It works fine so far and corresponds to the time the footage was recorded (this is on a Garmin Virb Ultra 30).

With GPS turned on, correlation values are recorded when the camera manages to sync with the satellites. These can be used as an offset - seconds and milliseconds counting from 1989-12-31 00:00:00 - to produce absolute timestamps from the relative timestamps for all the recorded events in the FIT file.

Try looking at a CSV file, converted using FitCSVTool.jar, in e.g. Excel and something might click. Search for a row called "timestamp_correlation" (it should also begin with "Data", not "Definition").

A "normal" data message (e.g. gps_metadata) might have a (relative) timestamp like so:
timestamp 204 s ... timestamp_ms 850 ms

Whereas for the timestamp_correlation row you instead get the correlation value in the same column, e.g.:
timestamp 854511660 s ... timestamp_ms 0 ms

The "normal", relative timestamp can instead be found later on the the same row:
system_timestamp 258 s ... system_timestamp_ms 128 ms

I couldn't find any mention of system_timestamp in the documentation, but it corresponds well to the rest of the file and to the actual time of recording.
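If it helps, here's a rough Python sketch of pulling that row out of a converted CSV. It assumes the FitCSVTool column layout described above - Type, Local Number, Message, then repeating (Field, Value, Units) triplets - and the file name is just a placeholder:

    import csv

    def read_correlation(csv_path):
        # Scan a FitCSVTool CSV for the timestamp_correlation Data row
        # and return its fields as a {name: value} dict.
        with open(csv_path, newline="") as f:
            for row in csv.reader(f):
                if len(row) >= 3 and row[0] == "Data" and row[2] == "timestamp_correlation":
                    # Columns 3, 4, 5 hold the first Field/Value/Units
                    # triplet; pair every field name with its value.
                    return {name: value
                            for name, value in zip(row[3::3], row[4::3]) if name}
        return None

    corr = read_correlation("2017-01-28-05-16-40.csv")  # placeholder name
    # e.g. {'timestamp': '854511660', 'timestamp_ms': '0',
    #       'system_timestamp': '258', 'system_timestamp_ms': '128', ...}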

If you, like me, are decoding the FIT file directly, you have to check what the corresponding definition message tells you about the data structure of a specific type of data message.

     
Total Posts: 9
Joined 2018-02-28

I followed exactly your post where you wrote the equation to obtain the time zero as:
t_zero = timestamp + timestamp_ms - system_timestamp - system_timestamp_ms

And I am still using this. I think you got it right except for the "/1000" thing.

Here's an example with my data from the FIT file, converted to CSV using the FITtoCSV.bat file (I don't know Java and I'm working in R).

Here's the timestamp_correlation info:
--------------------------------------
Type: Data, Local Number: 0, Message: timestamp_correlation
  timestamp            890585147 s
  system_timestamp           231 s
  local_timestamp      890570747 s
  timestamp_ms                 0 ms
  system_timestamp_ms        201 ms


Here's the gps_metadata data with the timestamp calculated by dividing the "ms" component by 1000. The column "timestamp" is the one I calculated from the time zero. The column "utc_timestamp_OR" is the original one as it appears in the CSV.

Row   Message       timestamp  sys_stamp  sys_stamp_ms  utc_timestamp_OR  lat_semicircles  lon_semicircles  lat       lon
3226  gps_metadata  890585470  553        920           890585470         558107742       -850366331       46.78005  -71.27688
3227  gps_metadata  890585470  554        20            890585470         558107737       -850366326       46.78005  -71.27688
3228  gps_metadata  890585470  554        120           890585470         558107734       -850366323       46.78005  -71.27688

* For some reason (unknown to me) the values in 'timestamp' do not show decimals. However, they do have them, as you can see from the following check (gps_fit is the name of my data frame):
> gps_fit[3228,'timestamp']==gps_fit[3227,'timestamp']
[1] FALSE
> gps_fit[3228,'timestamp']-gps_fit[3227,'timestamp']
[1] 0.1


Here's the gps_metadata data with the timestamp calculated WITHOUT dividing the "ms" component by 1000. Again, the column "timestamp" is the one I calculated from the time zero and "utc_timestamp_OR" is the original one as it appears in the CSV.

Row   Message       timestamp  sys_stamp  sys_stamp_ms  utc_timestamp_OR  lat_semicircles  lon_semicircles  lat       lon
3226  gps_metadata  890585269  553        920           890585470         558107742       -850366331       46.78005  -71.27688
3227  gps_metadata  890585269  554        20            890585470         558107737       -850366326       46.78005  -71.27688
3228  gps_metadata  890585269  554        120           890585470         558107734       -850366323       46.78005  -71.27688


890585470 - 890585269 = 201, which is exactly the raw ms component, not divided.

This is why I think you should divide by 1000.
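The same check in Python, mirroring the arithmetic above (the variable names are mine):

    # Correlation values from the CSV above.
    timestamp, timestamp_ms = 890585147, 0
    system_timestamp, system_timestamp_ms = 231, 201

    # Mixing seconds with raw milliseconds:
    t_zero_mixed = timestamp + timestamp_ms - system_timestamp - system_timestamp_ms
    # Converting the ms components to seconds first:
    t_zero = timestamp + timestamp_ms / 1000 - system_timestamp - system_timestamp_ms / 1000

    print(t_zero_mixed)           # 890584715
    print(t_zero)                 # 890584915.799
    print(t_zero - t_zero_mixed)  # ~200.8, i.e. off by roughly the raw 201 ms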
Total Posts: 14
Joined 2017-04-10

No offence, but I think you've misunderstood the earlier posts. Sorry if I was being unclear. I currently decode the FIT file directly and treat the values according to what they represent. So yes, in a way you could say there is a division by 1000, but there is never such an explicit step in my (and I presume others') code, since unnecessary conversions should be avoided and there are often better ways of processing the data.

The seconds value is stored as an unsigned 32-bit integer and the milliseconds value as an unsigned 16-bit integer in the FIT file. Most modern programming languages have ways of dealing with dates and times that accept integer values and return a date or a duration (e.g. a "millisecond function" that takes an integer value, treats it as milliseconds, and returns a "millisecond time object" that can be passed around, added to other time objects, etc. - this is your division by 1000, if you will, but it's a bit more involved than that).
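In Python, for example, datetime.timedelta does exactly this - the seconds and milliseconds go in as separate integers and the unit conversion happens internally:

    from datetime import timedelta

    # Seconds and milliseconds are separate integer arguments;
    # timedelta normalises the units itself, so no explicit "/1000".
    d = timedelta(seconds=210, milliseconds=234)
    print(d.total_seconds())  # 210.234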

My code, while much simpler than what you find in the SDK, works fine so far and returns the values I expect. The timestamped KML file my code can export contains the correct location and time for each logged point as far as I can see (though I don't think the KML format was meant to store 10 Hz GPS logs - I added a very crude way to downsample the whole thing if needed).
Total Posts: 9
Joined 2018-02-28

No offence taken at all!
I'm not a programmer by background, so I didn't know about the difference between 32-bit and 16-bit numbers.
In my R code, if I don't divide, R doesn't know there is a difference and I end up with a wrong timestamp.

Good to know.

Thanks for your explanation,