Use of const is not going to "waste space." The advantage of const is that it provides type safety. You can approximate that with #defines, but it is rarely done, and if you are going to that trouble you should probably be using const anyway, as it is easier to read and more concise. Defines can also have surprising side effects in non-trivial use, sometimes even causing the wrong code to be generated, which is why the common use cases for defines have long been deprecated in C++ in favor of const.
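For example, here's a classic substitution pitfall (WIDTH and HEIGHT are just illustrative names). Because the define is replaced textually, operator precedence silently changes the result:

#define WIDTH 10 + 5
const int HEIGHT = 10 + 5 ;

int a = WIDTH * 2 ; /* expands to 10 + 5 * 2, which is 20, not 30 */
int b = HEIGHT * 2 ; /* 15 * 2, which is 30, as you'd expect */

Wrapping the define in parentheses avoids this particular trap, but the const version simply doesn't have it.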
Best practice, and the official recommendation, is to use const where it makes sense and to use define only where const breaks down. For the majority of Arduino uses, that means const should be the preferred mechanism; in practice, that's clearly not what most code does.
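To be fair, there are places where const genuinely breaks down: the pre-processor itself cannot read variables, so anything tested with #if has to be a define. A minimal sketch, with illustrative names:

#define LOG_LEVEL 2

#if LOG_LEVEL >= 2
#define LOG(msg) Serial.println(msg)
#else
#define LOG(msg)
#endif

A const int LOG_LEVEL would be invisible to the #if, so this is one of the legitimate remaining uses of define.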
Now, some more detail. The compiler isn't JUST a compiler. Modern compilers have what is called a pre-processor, which runs first and does several things. One of those things is to textually substitute every occurrence of a define before the compiler proper ever sees the code.
So when you have:
#define BAR 1
int foo( void ) {
return BAR ;
}
The compiler only ever sees:
int foo( void ) {
return 1 ;
}
Likewise, if you use:
const int BAR = 1 ;
int foo( void ) {
return BAR ;
}
The compiler actually does see a variable named BAR, and that's what scares people into believing it wastes space. The thing is, the compiler is very good at this kind of literal substitution (constant propagation); in fact, it's one of those trivial optimizations that was among the first to be implemented even in ancient compilers. So the compiler is smart enough not only to see how the variable BAR is used, but to see that it is only ever used as a read-only, constant value. The code the compiler generates is therefore semantically the same as:
int foo( void ) {
return 1 ;
}
Notice it's the same? The compiler will not waste space on the variable, because it is effectively compiled out, having been replaced by the literal. Even in the assembly you should see something like the literal value 0x0001 being loaded into a register, rather than a memory lookup. That is, the generated instructions treat it as a literal value, not a variable to be fetched from memory; and since there is no lookup in memory, the variable is compiled out. It never exists in the compiled output.
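If you want to verify that for yourself, compile each version to an assembly listing and compare them; avr-g++ and the file names here are just one way to do it:

avr-g++ -Os -S define_version.cpp
avr-g++ -Os -S const_version.cpp

In both listings you should find foo loading the literal directly (on AVR, an ldi instruction) rather than fetching a variable from data memory (an lds instruction).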
Now then, const also provides a benefit over define: type safety. In the define above, the compiler just infers a type for the literal '1'. That can create problems, though generally not for the simple case given above. A const declaration, on the other hand, tells the compiler the exact type of the variable, so it never has to guess and possibly guess wrong. To get the same effect from a define, you would have to write something like:
#define BAR ((int)1)
Which, as you can see, is comparable to:
const int BAR = 1 ;
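A concrete illustration of why the type matters on an 8-bit Arduino, where int is 16 bits (the names are illustrative):

#define TIMEOUT 40000
const unsigned int TIMEOUT_MS = 40000u ;

The literal 40000 does not fit in a 16-bit signed int, so the compiler quietly gives the define the type long, and every calculation involving it is then done in 32 bits. The const states the type explicitly, so the value stays a 16-bit unsigned int, which is both what was meant and cheaper on this target.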
Long story short, const should be preferred over define.