I know this is probably a very basic question, but I am curious: can't a #define also be a string? Is this a compiler thing?
Can you please explain your question more fully, perhaps with an example?
#define a_value 5
#define a_string "the"
#defines are handled by the preprocessor, which basically does a find-and-replace with them.
There is no type checking on #defines until after they're substituted in. If that then produces a syntax error, you'll get whatever error that results in.
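As a quick illustration (the define name and value here are made up), the #define line itself never produces an error; the error only shows up at the line where the substituted text doesn't fit:
#define LED_COUNT "ten"     // hypothetical define whose value is a string literal

int leds[LED_COUNT];        // compile error reported here, not at the #define:
                            // after substitution the array size is "ten", not an integer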
It can be a string, but it must be used in a context where the string makes sense. Give us an example of what you want to do.
KeithRB:
It can be a string, but it must be used in a context where the string makes sense. Give us an example of what you want to do.
Maybe you should give me an example of what you mean. Because I'm not necessarily trying to do anything. I was just reading some C code that couples with my LabVIEW code and I just randomly had this question.
Are you perhaps thinking of the preprocessor stringify operator?
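For reference, the stringify operator # turns a macro argument into a string literal; a minimal standalone sketch:
#include <stdio.h>

#define STRINGIFY(x) #x     // the # operator turns the argument into a string literal

int main() {
  printf("%s\n", STRINGIFY(hello + world));   // prints: hello + world
  return 0;
}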
#include <stdio.h>   // for printf

#define a_value 5
#define another_value "this is 5"

void loop() {
  int compare_value = 5;
  if (compare_value == a_value) {
    printf(another_value);
  }
}
In my defines I have what could arguably be an int and a string. So when I define, what are the data types? Is "a_value" defined as an integer at compile time?
For example
#define myName "KeithRB"
char buffer[16];
int i;
strcpy(buffer, myName); // OK
i = myName; // not OK
Conceptually, the preprocessor goes through your code and everywhere it sees a_value it replaces it with 5.
Types are not involved which is why you can do this:
#define maxxie(a,b) ((a)>(b)? (a): (b))
The types depend on the type of a and b, and can be anything.
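For example (a short sketch using the macro above), the same text expansion works for ints and floats, and the compiler types the result from whatever operands get pasted in:
#define maxxie(a,b) ((a)>(b)? (a): (b))

int main() {
  int   bigger_int   = maxxie(3, 7);         // expands to ((3)>(7)? (3): (7)) -- an int
  float bigger_float = maxxie(2.5f, 1.0f);   // same macro, float operands -- a float
  return bigger_int > bigger_float ? 0 : 1;  // 7 > 2.5f, so this returns 0
}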
So the compiler finds the appropriate data type for the constant?
My point is that we don't write:
#define int a_value 5
#define char words[16] another_value "this is 5"
In my defines I have what could arguably be an int and a string. So when I define, what are the data types? Is "a_value" defined as an integer at compile time?
A #define associates a name with a value. The name does NOT have a type. The value does NOT have a type.
The context in which the name is used in the code gives the substituted value its type.
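A small sketch of that idea (added here for illustration, not part of the original post):
#define a_value 5

int   n = a_value;   // the pasted 5 is used as an int here
long  l = a_value;   // ...stored in a long here
float f = a_value;   // ...and converted to 5.0f here

int main() { return n - 5; }   // returns 0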
#define does a string substitution in the source before it hits the compiler "proper". There is no type involved. For example:
#define FOO if
FOO (a > 42)
{
Serial.println ("a is greater than 42");
}
In my example the word FOO is replaced by "if" and thus the "if test" underneath works. Clearly there is no type involved here. Effectively, the compiler then sees (after the substitution):
if (a > 42)
{
Serial.println ("a is greater than 42");
}
#define MYCONSTANTINTEGER 3
#define MYCONSTANTSTRING "Hello Arduino"

void setup()
{
  int a;
  char str[10];

  if (a > MYCONSTANTINTEGER)
  {
  }
  else
  {
  }

  if (strcmp(str, MYCONSTANTSTRING) == 0)
  {
  }
  else
  {
  }
}
After preprocessing, the code that the compiler sees is:
void setup()
{
  int a;
  char str[10];

  if (a > 3)
  {
  }
  else
  {
  }

  if (strcmp(str, "Hello Arduino") == 0)
  {
  }
  else
  {
  }
}
And that is the code that is compiled.
JHawk88:
#define a_value 5
#define a_string "the"
When talking about defines that simply 'define' a literal value, there is always a type involved, regardless of context (and literals are the reason why you do not need to write something like #define int a_value 5).
To simplify it a little, remove the define and just look at the data:
5
"the"
The first is an integer literal, and the second is a string literal. These have clearly defined types in the standard. Since defines are copy-paste operations, the rules for literal types apply to that data, no different from writing the literals in yourself.
From section 2.14.2 of the standard:
The type of an integer literal is the first of the corresponding list in Table 6 in which its value can be represented
The table basically shows the types in order from smallest to largest, and from it you can determine what type your literal is before your define is ever used in code.
If RTTI were enabled, you could verify this using typeid(decltype(5)), or typeof (GCC-specific).
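As a compile-time alternative (assuming a C++11 host compiler; this snippet is not from the original posts), static_assert with <type_traits> confirms the literal types without needing RTTI at all:
#include <type_traits>

#define a_value 5
#define a_string "the"

// the integer literal 5 has type int
static_assert(std::is_same<decltype(a_value), int>::value,
              "a_value pastes in an int literal");

// the string literal "the" has type const char[4] ("the" plus the trailing '\0');
// decltype sees it as a reference because string literals are lvalues
static_assert(std::is_same<decltype(a_string), const char(&)[4]>::value,
              "a_string pastes in a narrow string literal");

int main() { return 0; }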
People get confused when using an integer literal in expressions, but integer literals simply follow the rules of integral promotion and the usual arithmetic conversions, and are implicitly converted to the widest type in the expression. Moreover, floating-point types rank above integers, so your integer literal can be implicitly converted to a floating-point value.
These conversions happen at compile time, which makes it seem like the literal is changing type; however, they are well-defined conversions, or promotions, in the standard.
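A short sketch of that point (same C++11 assumption as above): the literal keeps its own type, but the promotions and conversions decide the type of the whole expression:
#include <type_traits>

#define a_value 5

static_assert(std::is_same<decltype(a_value), int>::value,
              "the pasted literal itself is an int...");
static_assert(std::is_same<decltype(a_value + 3L), long>::value,
              "...in a mixed expression it converts to the wider integer type...");
static_assert(std::is_same<decltype(a_value + 2.5), double>::value,
              "...and floating point wins over integers");

int main() { return 0; }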
A string literal is clearly defined also:
7 A string literal that begins with u8, such as u8"asdf", is a UTF-8 string literal and is initialized with the given characters as encoded in UTF-8.
8 Ordinary string literals and UTF-8 string literals are also referred to as narrow string literals. A narrow string literal has type “array of n const char”, where n is the size of the string as defined below, and has static storage duration (3.7).
9 A string literal that begins with u, such as u"asdf", is a char16_t string literal. A char16_t string literal has type “array of n const char16_t”, where n is the size of the string as defined below; it has static storage duration and is initialized with the given characters. A single c-char may produce more than one char16_t character in the form of surrogate pairs.
... and so on
As for defines that substitute arbitrary code (Nick Gammon's example) or a context-dependent set of expressions (KeithRB's example): there you must rely on your own ability to keep track of how the expressions are used in your code to ensure it is correct. This assumption-based 'C-style' programming is why C++ has grown to provide many features, like templates, that let you keep everything in a well-defined and type-safe domain.
But the defines you are asking about are fine; their contents are well defined even without context.
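To make the template point concrete (a sketch added here, not something from the original posts), a function template gives the maxxie macro's flexibility while keeping everything typed:
// type-safe stand-in for the maxxie macro: T is deduced from the arguments
template <typename T>
T maxxie(T a, T b) {
  return (a > b) ? a : b;
}

int main() {
  int   i = maxxie(3, 7);        // T deduced as int
  float f = maxxie(2.5f, 1.0f);  // T deduced as float
  // maxxie(3, 2.5);             // unlike the macro, mixing types is a compile error here
  return (i == 7 && f == 2.5f) ? 0 : 1;
}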
Nick put it well. #define string substitutions need not be data types at all:
// "valueless" symbol, useful with #ifdef and #ifndef
#define DEBUG
// function-like behavior, expands to code:
#define DEBUGOUT(string, val) do { if (debugging) { Serial.print(string " = "); Serial.println(val); } } while (0)
// things that expand to PARTIAL statements, helpful for some complex data structure definitions
#define MENU_START(menu_name) \
const menu_item_t menu_name[] menu_in_progmem_ = {
#define MENU_END(name) {0, 0} };
And #defines don't have to be data at all:
#define HANDLE_RESPONSE1(pcmd, errstate)                    \
  pCmdQueue->PopResponse1(pcmd);                            \
  if (((CmdMessage*)pcmd)->getStatus() != ATC_ERR_NOERR)    \
  {                                                         \
    PendingError = ((CmdMessage *)pcmd)->getStatus();       \
    NextState = errstate;                                   \
    break;                                                  \
  }                                                         \
  else                                                      \
  {                                                         \
    delete pcmd;                                            \
    pcmd = NULL;                                            \
  }
Regards,
Ray L.
And they don't have to be anything at all.
#define nothing
pYro_65:
When talking about defines that simply 'define' a literal value there is always a type involved, regardless of context.
A #define doesn't have a type. It is just a text substitution. The substituted text may well have a type.
If I may suggest, avoid #define and use const declarations. Or templates for more fancy stuff.
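A minimal sketch of that suggestion, reusing the names from the earlier defines (buffer_size is invented here):
const int  a_value    = 5;       // typed constant: the compiler knows this is an int
const char a_string[] = "the";   // typed constant: an array of char

constexpr int buffer_size = 16;  // C++11: guaranteed compile-time constant
char buffer[buffer_size];        // usable as an array size, just like a #define

int main() { return a_value - 5; }   // returns 0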
And what is substituted does have a type. It is also known before substitution. What I wrote was very clear:
there is always a type involved, regardless of context.
The type does not change depending on how you use it. It is already very clear.