50 years of Assembly programming, now learning C++. This old dog is having trouble learning new tricks.
I'm looking for a clean way of doing a uint8_t to char array conversion. It was easier for me to write the background I2C LCD interrupt library than to figure out simple string handling, so please help; for some reason I just can't get my head around this.
I have the following code working, but I can't believe there is no way to do this without writing a looping function. (The next question will be the same, but HEX/BIN instead of DEC.)
This is to build a UDP packet for ASCII-readable micro-to-micro messaging.
uint8_t myIP = 123;
char LCD_Line[] = "###-###- #####-#####";
foo(LCD_Line, myIP, 4); // Convert myIP to string and copy to char array starting at pos 4
// without adding null
LCD.Print(LCD_Line); // Example only; I do not actually want to print or display.
There isn't a sprintf format specifier for binary that I know of. You'll have to write an integer-to-string function yourself, and then sprintf the resulting buffer with %s.
If it's only ever a byte, you could use %08x to print in hex, then write a function which takes the byte and builds a uint32_t one bit at a time, shifting the result left 4 bits per step (each nibble ends up either 0x0 or 0x1, representing a single binary digit).
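A minimal sketch of that idea (standard C/C++; byte_to_nibbles is my own name, not an existing library call):

#include <stdio.h>
#include <stdint.h>

// Spread each bit of 'b' into its own nibble of a 32-bit value, MSB first,
// so the result printed in hex reads like the byte printed in binary.
uint32_t byte_to_nibbles(uint8_t b)
{
    uint32_t result = 0;
    for (int i = 7; i >= 0; i--)
    {
        result = (result << 4) | ((b >> i) & 0x1);
    }
    return result;
}

int main(void)
{
    char buf[9];                                             // 8 hex digits + '\0'
    sprintf(buf, "%08lX", (unsigned long)byte_to_nibbles(0xB2));
    printf("%s\n", buf);                                     // shows: 10110010
    return 0;
}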
+k for the excellent way to describe what you have and what you want to achieve (I am ignoring/correcting a few mistakes) -- the style demonstrates the beauty of your faculty as an Assembly Language Programmer.
The following sketch may be helpful for you.
uint16_t myIP = 321;
char LCD_Line[] = "###-###- #####-#####";

void setup()
{
    Serial.begin(9600);
    foo(LCD_Line, myIP, 4);   // Convert myIP to string and copy to char array starting at pos 4
                              // without adding a null
    Serial.println(LCD_Line); // shows: ###-321- #####-#####
}

void loop()
{
}

void foo(char LCD_Line[], uint16_t myIP, int i)
{
    char myArray[3];          // holds up to three decimal digits (indices 0, 1, 2)
    int j = 0;
    do
    {
        myArray[j] = myIP % 10 + '0';   // extract the least significant digit as ASCII
        myIP = myIP / 10;
        LCD_Line[i + 2] = myArray[j];   // place it, right-aligned, with no terminating null
        //Serial.println(LCD_Line[i+2], HEX);
        i--;
        j++;
    }
    while (myIP != 0);
}
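For what it's worth, here is another way to do the same placement without a hand-rolled digit loop, building on the sprintf suggestion above. This is only a sketch of my own (place_decimal is not from the thread): print into a small scratch buffer first, then memcpy only the digits, so the terminating null never reaches LCD_Line.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

// Write the decimal digits of 'value' right-aligned so the last digit lands
// at index pos+2, without copying snprintf's trailing '\0' into 'line'.
void place_decimal(char line[], uint8_t value, int pos)
{
    char scratch[4];                                            // up to 3 digits plus '\0'
    int len = snprintf(scratch, sizeof scratch, "%u", (unsigned)value);
    memcpy(&line[pos + 3 - len], scratch, len);                 // digits only, no null
}

Calling place_decimal(LCD_Line, myIP, 4) with the original uint8_t myIP = 123 would leave LCD_Line as ###-123- #####-#####.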
jrpjim001:
50 years of Assembly programming, now learning C++. This old dog is having trouble learning new tricks.
I'm looking for a clean way of doing a uint8_t to char array conversion.
the C languages do not specify how many bytes a short, int, or long is. eventually, stdint.h was created to be specific about the number of bits and the sign: int8_t, uint8_t, int16_t, int32_t, int64_t.
in other words uint8_t is the same as char (or unsigned char)
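as a quick sanity check, the fixed widths can be confirmed at compile time on any C++11 toolchain (the assertions assume 8-bit bytes, which covers AVR, ARM, and x86):

#include <stdint.h>

// each assertion holds wherever a byte is 8 bits
static_assert(sizeof(int8_t)   == 1, "int8_t is one byte");
static_assert(sizeof(uint16_t) == 2, "uint16_t is two bytes");
static_assert(sizeof(uint32_t) == 4, "uint32_t is four bytes");

int main() { return 0; }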
char, signed char, and unsigned char are three distinct types in C/C++. The other integer types do not share that trait (e.g. int and signed int are synonyms).
I assume that distinction is important for overloading and reflection but I could easily be mistaken. In general, for a given compiler + platform, it's a distinction without a difference.
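If it helps, a small standalone C++ snippet (my own, not from this thread) shows the three types really are distinct as far as overload resolution is concerned:

#include <iostream>

void show(char)          { std::cout << "char overload\n"; }
void show(signed char)   { std::cout << "signed char overload\n"; }
void show(unsigned char) { std::cout << "unsigned char overload\n"; }

int main()
{
    show('A');                              // picks the char overload
    show(static_cast<signed char>('A'));    // picks the signed char overload
    show(static_cast<unsigned char>('A'));  // picks the unsigned char overload
    return 0;
}

If the three were synonyms, the three definitions of show() would collide as redefinitions instead of coexisting as overloads.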
Coding Badly:
char, signed char, and unsigned char are three distinct types in C/C++.
what is the distinction between signed char and char? why aren't they synonymous like int and signed int?
stdint.h: typedef unsigned char uint8_t;
there aren't that many primitive data types (see Arduino/hardware/tools/avr/avr/include):
grep typedef stdint.h | sort
/* Since these typedefs are mandated by the C99 standard, they are preferred
   over rolling your own typedefs. */
typedef int16_t int_least16_t;
typedef int16_t int_fast16_t;
typedef int16_t intptr_t;
typedef int32_t int_least32_t;
typedef int32_t int_fast32_t;
typedef int32_t intmax_t;
typedef int64_t int_least64_t;
typedef int64_t int_fast64_t;
typedef int64_t intmax_t;
typedef int8_t int_least8_t;
typedef int8_t int_fast8_t;
typedef signed char int8_t;
typedef signed int int16_t __attribute__ ((__mode__ (__HI__)));
typedef signed int int16_t;
typedef signed int int32_t __attribute__ ((__mode__ (__SI__)));
typedef signed int int64_t __attribute__((__mode__(__DI__)));
typedef signed int int8_t __attribute__((__mode__(__QI__)));
typedef signed long int int32_t;
typedef signed long long int int64_t;
typedef uint16_t uint_fast16_t;
typedef uint16_t uint_least16_t;
typedef uint16_t uintptr_t;
typedef uint32_t uint_fast32_t;
typedef uint32_t uint_least32_t;
typedef uint32_t uintmax_t;
typedef uint64_t uint_fast64_t;
typedef uint64_t uint_least64_t;
typedef uint64_t uintmax_t;
typedef uint8_t uint_least8_t;
typedef uint8_t uint_fast8_t;
typedef unsigned char uint8_t;
typedef unsigned int uint16_t __attribute__ ((__mode__ (__HI__)));
typedef unsigned int uint16_t;
typedef unsigned int uint32_t __attribute__ ((__mode__ (__SI__)));
typedef unsigned int uint64_t __attribute__((__mode__(__DI__)));
typedef unsigned int uint8_t __attribute__((__mode__(__QI__)));
typedef unsigned long int uint32_t;
typedef unsigned long long int uint64_t;
gcjr:
what is the distinction between signed char and char?
From a practical perspective I have no idea. C/C++ provides implicit conversions so, as I stated earlier, having char as a distinct type strikes me as a distinction without a difference. For example, if char is signed, then operations mixing it with signed char are seamless.
gcjr:
why aren't they synonymous like int and signed int?
A question only the standards committee can answer.
is the distinction between char and signed char (and unsigned char) that a signed/unsigned char is considered a numeric value when printed, for example, while an unqualified char is an ascii character? that would be unique to char (vs int)
It is controlled by the overloaded print() function. There is a different prototype for each data type; if you call it with an unsigned char, for example, it will invoke the function that prints numerically, and so on. That only reflects how the print() function authors believe the types should be represented textually; they could be "right" or "wrong" with respect to the C/C++ standards. But it is a kind of smoking gun.
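A minimal Arduino sketch (my own, not from the thread) of how those prototypes play out; the outputs noted in the comments are what the overload resolution described above should produce:

void setup()
{
    Serial.begin(9600);

    char c = 'A';
    signed char sc = 'A';
    unsigned char uc = 'A';

    Serial.println(c);    // matches print(char)                 -> prints: A
    Serial.println(sc);   // no signed char overload; promotes
                          // to int and matches print(int)       -> prints: 65
    Serial.println(uc);   // matches print(unsigned char)        -> prints: 65
}

void loop()
{
}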
aarg:
It is controlled by the overloaded print() function. There is a different prototype for each data type; if you call it with an unsigned char, for example, it will invoke the function that prints numerically, and so on...
Given:
signed char x2 = 0x41;
Serial.print(x2); shows: 65 -- a numerical value.
It could have shown 41, which is also a numerical value; but by default the print() method does not show the value in hex base.
So, my understanding is:
The compiler looks at the data type; if it carries the signed modifier, the Serial.print() call is effectively transformed as follows:
Serial.print(x2);
==> Serial.print(x2, DEC);  //the decimal base is invoked
==> Serial.print(65, DEC);
==> Serial.write(0x36);     //shows: 6; 0x36 is the ASCII code of '6'
==> Serial.write(0x35);     //shows: 5
the output of the test program below:
▒ ▒ ▒
136 -120 -120
136 4294967176 4294967176
a is pos
b is neg
c is neg
#include <stdio.h>

unsigned char a = 0x88;   // 0x88 == 136; fits in an unsigned char
signed char   b = 0x88;   // 136 does not fit in a signed char; wraps to -120 here
char          c = 0x88;   // behaves like b when plain char is signed (as on AVR/x86)

int main()
{
    printf(" %c %c %c\n", a, b, c);                 // printed as characters (not printable ASCII)
    printf(" %d %d %d\n", a, b, c);                 // printed as signed decimal
    printf(" %u %u %u\n", a, b, c);                 // b and c promote to a negative int first
    printf(" a is %s\n", 0 > a ? "neg" : "pos");
    printf(" b is %s\n", 0 > b ? "neg" : "pos");
    printf(" c is %s\n", 0 > c ? "neg" : "pos");
    return 0;
}