Trouble mixing up data types

I've found that there are times when I have mistakenly mixed data types, which has caused some odd errors. I define my variable data types at the top of the program, but when the program becomes rather lengthy, I forget what a certain variable has been defined as.

I was wondering if there is a programming method that would make it easy to know what type a variable is. For example, if I want to have an integer variable named Total, would a naming convention like placing a small 'i' at the front of the name be appropriate? Then the integer Total would be named iTotal, and the type of the variable would never be in question. Is there a way of doing this?
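Something like this is what I have in mind (iTotal, fAverage, and so on are just names I'm making up to illustrate the idea, not an established standard):

```cpp
#include <cstdio>

int main() {
    // Example names only: the prefix encodes the type,
    // 'i' for int, 'f' for float, 'b' for bool
    int   iTotal   = 0;
    float fAverage = 0.0;
    bool  bDone    = false;

    iTotal   = 40 + 2;
    fAverage = iTotal / 2.0;   // the 'i' reminds me that iTotal is an integer
    bDone    = true;

    printf("iTotal=%d fAverage=%.1f bDone=%d\n", iTotal, fAverage, bDone);
    return 0;
}
```

Thanks, Mike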

When I worked professionally, there was no such guideline in the accepted styles. You typically understood what types were being used, and if you made a mistake, the compiler usually let you know.

However, a common mistake is dividing two integers and expecting a float result. A common fix is to add a multiplication by 1.0, which causes the compiler to perform a floating point divide.
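A quick sketch of the difference (the variable names are just for illustration):

```cpp
#include <cstdio>

int main() {
    int sum   = 7;
    int count = 2;

    // Both operands are int, so the division truncates: 3, not 3.5
    float wrong = sum / count;

    // Multiplying by 1.0 first promotes the expression to floating point,
    // so the compiler performs a floating point divide: 3.5
    float right = 1.0 * sum / count;

    printf("wrong = %.1f  right = %.1f\n", wrong, right);
    return 0;
}
```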

Another pretty common practice is to always capitalize at least the first letter of the name of a constant, whether it is a #define or a const integer, and to always make the starting letter of any variable lower-case.
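For example (MAX_READINGS, SensorPin, and readingCount are made-up names, just to show the casing):

```cpp
#include <cstdio>

// Constants get at least a leading capital; macros are often all caps
#define MAX_READINGS 10
const int SensorPin = 3;

int main() {
    // Ordinary variables start with a lower-case letter
    int readingCount = 0;
    int readings[MAX_READINGS] = { 0 };

    readings[readingCount++] = SensorPin;
    printf("readingCount=%d first=%d\n", readingCount, readings[0]);
    return 0;
}
```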

User-defined types should get a suffix: enums "_e", structs "_s", and typedefs "_t". As types, their first letter was often capitalized.
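Something along these lines (the type names here are invented just to show the suffixes):

```cpp
#include <cstdint>
#include <cstdio>

// Enum type: "_e" suffix, first letter capitalized (example names only)
enum Color_e { RED, GREEN, BLUE };

// Struct type: "_s" suffix
struct Point_s {
    int16_t x;
    int16_t y;
};

// Typedef: "_t" suffix
typedef uint32_t Millis_t;

int main() {
    Color_e  led    = GREEN;
    Point_s  origin = { 0, 0 };
    Millis_t start  = 42;

    printf("led=%d origin=(%d,%d) start=%lu\n",
           (int)led, origin.x, origin.y, (unsigned long)start);
    return 0;
}
```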

It's also common to use single letters as loop indices, starting with i for integer, then j, k, ... and of course n.

The only other comment, emphasized by Brian Kernighan in his last book, was to keep local variable names short: "i" instead of "indexOfSensorArray", for example.
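For example (a made-up two-dimensional sensor array, just to show the short index names):

```cpp
#include <cstdio>

int main() {
    const int ROWS = 2, COLS = 3;
    int sensor[ROWS][COLS] = {
        { 1, 2, 3 },
        { 4, 5, 6 }
    };

    // Short names for short-lived loop indices: i and j,
    // not indexOfSensorArray
    long total = 0;
    for (int i = 0; i < ROWS; i++) {
        for (int j = 0; j < COLS; j++) {
            total += sensor[i][j];
        }
    }
    printf("total = %ld\n", total);
    return 0;
}
```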

> Is there a way of doing this?

Sure. Just give the variables the name when you declare them; there is no way to do it automatically. You can go a long way toward avoiding the problem by giving variables meaningful names. For instance, I would not expect a variable named yearOfBirth to be declared as a byte.
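For example (assuming byte is an 8-bit unsigned type, which is why I use uint8_t here; the values are made up):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // The name alone tells you a byte is the wrong choice:
    // a year needs more than the 0..255 an 8-bit byte can hold
    uint8_t tooSmall    = 1985;   // wraps to 193; the compiler may only warn
    int     yearOfBirth = 1985;   // an int holds a four-digit year comfortably

    printf("tooSmall=%u yearOfBirth=%d\n", (unsigned)tooSmall, yearOfBirth);
    return 0;
}
```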

A better way to prevent such problems is to declare variables in the scope where they will be used instead of globally. That way the declaration is usually not far from the use of the variable on the screen, so it is easy to check, and if you pass the variable to a function whose parameter is declared with the wrong data type, the compiler will complain.
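A small sketch of what that looks like (the function and variable names are made up for illustration):

```cpp
#include <cstdio>

// The parameter list documents exactly what types the function expects
float average(const int readings[], int count) {
    long sum = 0;
    for (int i = 0; i < count; i++) {
        sum += readings[i];
    }
    return 1.0f * sum / count;   // promote before dividing
}

int main() {
    // Declared in the scope where it is used, so the type is visible
    // right next to the code that uses it
    int readings[4] = { 10, 20, 30, 41 };
    printf("average = %.2f\n", average(readings, 4));

    // float badReadings[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    // average(badReadings, 4);   // compiler error: float* does not convert to const int*

    return 0;
}
```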

Thanks for the ideas. I seem to remember that in Fortran IV, variables starting with I through N were integers by default. Maybe I can develop my own standard and then try to stick to it. Thanks, Mike

You can never have too many standards :slight_smile: