
ANT Forum



C++ decoding: GetTimestamp() always returns FIT_UINT32_INVALID


Total Posts: 7

Joined 2017-02-08


I am using C++ and am running into the following problem: GetTimestamp() never returns a valid value, always FIT_UINT32_INVALID.

void fitListener::OnMesg(fit::RecordMesg mesg)
{
    int HFvalue = mesg.GetHeartRate();
    FIT_DATE_TIME timestamp = mesg.GetTimestamp(); // seconds since the FIT epoch, 32 bits
    if (timestamp == FIT_UINT32_INVALID)
        std::wcout << "Invalid time stamp :(\n";
    else
        std::wcout << "heart_rate at (" << timestamp << "): " << HFvalue << "\n";
}

The only place where those fields are created is in fit_decode.cpp:541:

Field timestampField = Field(Profile::MESG_RECORD, Profile::RECORD_MESG_TIMESTAMP);

but I never reach the breakpoint placed there.

I have successfully decoded the file with the Java FIT2csv tool, and it does include timestamps, of course.

Any place I should look? Could this be a C++-specific issue? Are there people using C++ for decoding, on Linux/64-bit with GCC?

Looking forward to any reply.

Total Posts: 7

Joined 2017-02-08


Hmmm, found the bug!

Linux uses the LP64 (I32LP64) data model, but Windows uses LLP64 (IL32P64) (see https://en.wikipedia.org/wiki/64-bit_computing).

This has consequences for the type definitions in the SDK:

namespace fit
{
#if defined(FIT_USE_STDINT_H)
#pragma message "Using STDINT.H for type definitions"
    typedef ::int8_t           int8_t;
    typedef ::int16_t          int16_t;
    typedef ::int32_t          int32_t;
    typedef ::int64_t          int64_t;
    typedef ::uint8_t          uint8_t;
    typedef ::uint16_t         uint16_t;
    typedef ::uint32_t         uint32_t;
    typedef ::uint64_t         uint64_t;
#else
#pragma message "Using platform dependant types for type definitions"
    typedef unsigned char        uint8_t;
    typedef unsigned short       uint16_t;
    typedef unsigned long        uint32_t;
    typedef unsigned long long   uint64_t;
    typedef signed char          int8_t;
    typedef signed short         int16_t;
    typedef signed long          int32_t;
    typedef signed long long     int64_t;
#endif
}

Now the long variants are 64 bits wide. This has consequences for:

FIT_UINT8 FieldBase::GetNumValues(void) const
{
    if (!IsValid())
        return 0;

    if (GetType() != FIT_BASE_TYPE_STRING)
    {
        int size   = values.size();
        int gtype  = GetType();
        int idx    = gtype & FIT_BASE_TYPE_NUM_MASK;
        int btsize = baseTypeSizes[idx];
        return (FIT_UINT8)(size / btsize);
    }

    return (FIT_UINT8)stringIndexes.size();
}

because of

const FIT_UINT8 baseTypeSizes[FIT_BASE_TYPES] =
{
    sizeof(FIT_ENUM),
    sizeof(FIT_SINT8),
    sizeof(FIT_UINT8),
    sizeof(FIT_SINT16),
    sizeof(FIT_UINT16),
    sizeof(FIT_SINT32),
    sizeof(FIT_UINT32),
    sizeof(FIT_STRING),
    sizeof(FIT_FLOAT32),
    sizeof(FIT_FLOAT64),
    sizeof(FIT_UINT8Z),
    sizeof(FIT_UINT16Z),
    sizeof(FIT_UINT32Z),
    sizeof(FIT_BYTE),
    sizeof(FIT_SINT64),
    sizeof(FIT_UINT64),
    sizeof(FIT_UINT64Z),
};

whose entries are now badly wrong.

On Linux, baseTypeSizes[] now holds oversized values, so single-value fields return 0 from GetNumValues() because of the inflated divisor. This causes fields not to be added when decoding:

if (field.GetNumValues() > 0)
{
    mesg.AddField(field);
}

To compile and work correctly on Linux, fit_config.hpp needs this modification:

#ifdef __linux__
    #define FIT_USE_STDINT_H    // Define to use stdint.h types. By default, integer type sizes in bytes are assumed to be char=1, short=2, long=4.
#endif

So please Garmin/Dynastream, FIX THIS!!!      

Total Posts: 68



Absolutely! Great Catch!

Our next SDK release is planned for February 15th; we will make sure this is addressed in that version!