Correct use of digitalRead and bool

You are using "address" in an ambiguous way when talking about variables and variable types. In C (and other programming languages), "address" means "memory address".

When we assign a value to a variable, the value is stored at that variable's memory address.

The memory location where data is stored is the address of that data.
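
For example (a minimal C sketch; the variable names are just for illustration):

```c
#include <stdio.h>

int main(void) {
    int x = 42;     /* the value 42 is stored at x's memory address */
    int *p = &x;    /* &x yields that address; p now holds it */

    printf("value: %d, stored at address %p\n", x, (void *)p);
    return 0;
}
```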

Here's the power of C (and most computer languages have this): I can write a function in C that will let me address bits. Is that not C?
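
For instance, something along these lines (a hypothetical pair of helpers; the names and the (pointer, bit index) convention are mine, not a standard API):

```c
#include <stdint.h>
#include <stdbool.h>

/* Write one bit within the byte that 'byte' points to.
   The (pointer, bit index) pair acts as a "bit address". */
void bit_write(uint8_t *byte, unsigned bit, bool value) {
    if (value)
        *byte |=  (uint8_t)(1u << bit);   /* set the bit with OR */
    else
        *byte &= (uint8_t)~(1u << bit);   /* clear it with AND   */
}

/* Read one bit back from that "bit address". */
bool bit_read(const uint8_t *byte, unsigned bit) {
    return (*byte >> bit) & 1u;
}
```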

I think the distinction is between you using C to write code to address individual bits, which of course is possible, and C having that capability already built in, which it does not.

Some processors have, in their assembly language, the ability to directly manipulate individual bits in their registers; there is nothing in C that matches that ability.

What I don't know is whether C compilers for processors with bit manipulation capability generate machine code utilising that capability when you write C code for individual bits.

That's why C has libraries: the extensions can be shared. The standard libraries are examples of that.

Maybe I should just say that "you can do that with C, it's no big deal."

As stated, you cannot address single bits in C in the sense of addressing their memory location (and so being able to use pointer arithmetic on them), because the smallest addressable unit in C is the char/byte, not the bit.
Nonetheless, you can retrieve single-bit values by AND-ing chars, ints, longs or whatever with bitmasks. Therefore I would recommend very precise language and wording.
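
For example (a minimal sketch; the value 0xB2 is arbitrary):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t flags = 0xB2;                 /* 1011 0010 in binary */

    unsigned bit5 = (flags >> 5) & 1u;    /* shift down, mask the rest: gives 1 */
    unsigned bit0 = flags & 0x01u;        /* AND with a one-bit mask: gives 0   */

    printf("bit 5 = %u, bit 0 = %u\n", bit5, bit0);
    return 0;
}
```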

I started writing code for pay in 1980 and I never would have guessed that I had to be careful until just now.

My statement was not about coding but about describing what you're coding :sunglasses:

Welcome to the new world order

(have I mentioned already that I hate how "reply" does stupid things in this forum?)

There is none that I know of. But I am also not aware of any processors without bit-manipulation capabilities.

Before I learnt C, and long before I came to this forum, I wrote assembly language for PICs. PICs typically have one-word instructions to directly manipulate individual bits. I don't know about other ranges of processor, such as AVR and ARM, because I've never looked.

To my knowledge there is no real processor without single-bit operations; that would be hard to do, since you need them at least for registers. But there was a Java processor (not such a great success) which was essentially a JVM in silicon, and there is no "set bit" instruction in JVM bytecode. But a Java VM doesn't count as a processor, does it?

have I mentioned already that I hate how "reply" does stupid things in this forum?
mhhmm - yes, I remember...
https://forum.arduino.cc/t/the-forum-architecture-and-appearance-are-completely-confusing-overloaded-and-unclear/1176974

Your knowledge is limited.

The AVR (if you consider it a "real" processor) lacks an instruction to set or clear bits in a general register (the SBI/CBI instructions only work on the low 32 I/O registers).

RISC-V is also very much a real processor. The base integer instruction set also has no instructions to set/clear/test single bits in a register (or in memory, for that matter). There is an extension (called "B", of all things) to add bit-manipulation instructions but small embedded implementations of RISC-V are likely not to have that extension.

The MOS 6502 (one of the most popular "real" processors of the late '70s through the '80s) has no instructions to set or clear a single bit in a register. It does have a BIT instruction to test bits, but it's not exactly a simple single-bit test instruction.

The Intel 8080 and 8086 (also real processors) have no instruction to set/clear/test a single bit.

The Arm CPU (I'm sure you've heard of that) has no instruction to set/clear/test a single bit.

Need I go on?

On most of those real processors you can do single-bit operations by using OR/AND/XOR and the like with an operand containing a single bit set (or a single bit clear when using AND to clear a bit), no different from how it's done in C (e.g., x |= 1<<5 to set bit 5).
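
Spelled out, those C idioms look like this (a minimal snippet; x is just an arbitrary variable):

```c
void bit_idioms(void) {
    unsigned x = 0;

    x |=  (1u << 5);    /* set bit 5: OR with a single-bit mask    */
    x &= ~(1u << 5);    /* clear bit 5: AND with the inverted mask */
    x ^=  (1u << 5);    /* toggle bit 5: XOR with the mask         */

    if (x & (1u << 5)) {
        /* test bit 5: the AND is non-zero if the bit was set */
    }
}
```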

If you're referring to the data types that the digitalWrite and digitalRead functions take as parameters or what they return, then yes, it's a good idea to trust the documentation. If you rely solely on looking under the hood then you can get into things that are not guaranteed (such as what specific data types or values are being used).

Think of it like depending on implementation-defined behavior, or depending on private class members in your application. The implementation can change (just as it has with digitalWrite), as long as it still follows the guarantees that the documentation provides.

If you're familiar with how stdio is typically implemented, you'll know that FILE is an opaque type, but there's often an actual struct with implementation-defined members describing the open file (so getc() can be implemented as a macro). You can learn about these details by looking under the hood, but you can get into trouble if you depend on them in your own code, because they can change in the future.

There's a contract (hopefully documented) between the implementor (of a language and/or library) and the application. If it's not in the contract, the implementor is free to change it; if the application relied on things that changed and weren't agreed upon in the contract, then that's just too bad for the application.
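
Concretely, for digitalRead, honoring the contract looks like this (a minimal Arduino-style sketch; the pin number is just for illustration):

```cpp
const int buttonPin = 7;   // hypothetical pin, just for illustration

void setup() {
    pinMode(buttonPin, INPUT_PULLUP);
    Serial.begin(9600);
}

void loop() {
    // Part of the documented contract: digitalRead() returns HIGH or LOW.
    if (digitalRead(buttonPin) == LOW) {   // pressed, with pull-up wiring
        Serial.println("pressed");
    }
    // NOT part of the contract: assuming HIGH is literally 1, or that the
    // return type will never change. That's depending on the implementation.
}
```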

Other than that, looking under the hood to understand how something works is not a bad thing, and I don't see anyone suggesting it is, so it's a bit of a strawman.

Only if you want your code to continue working in 1, 5, 10, 20 ... years. I've been around the software world long enough (even 10 years should be long enough) to actually see code break, sometimes in strange ways, because it didn't "obey the standard".

We're not suggesting that either. It's a great way to learn how stuff works (see also looking under the hood).

Edit to add: You might be too young to remember, but some pieces of software depended on undocumented instructions in CPUs like the 6502. Those undocumented instructions were useful since they usually did multiple things the program needed, all in one instruction (e.g., increment this register and XOR it with that register, or whatever). But the instructions were "accidental": they weren't designed into the CPU, they were simply how the CPU behaved when it came across those opcodes.

Guess what happened when the CPU was updated a year or whatever later and those undocumented instructions no longer did exactly the same thing? Those programs no longer worked, or at least didn't work the way they intended (sometimes in subtle ways). They depended on non-guaranteed behavior and broke because of it.

Not to digress, I still have my C1P computer, based on the 6502

Yes, with BASIC in ROM

The AVR has a compact instruction set (around 130 instructions); most are one word and most execute in one cycle.

IMO where the AVR really shines is its 32 general-purpose CPU registers.

I see the same question again and again: is it an int? A byte? A char?

No, no, no.

bool was probably never meant to be stored; it was meant to determine whether or not to execute a jump instruction.

This leaves each compiler free, for the rare cases where a bool variable needs to be stored, to pick its own way: a data type that can be evaluated to a bool later on.

The size and representation of a bool is implementation-defined, so as long as you store and load by the same implementation, it should be safe. It's no more or less safe than storing and loading an int or a long or what have you.
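
For example, this tiny C program just reports what the implementation chose (the printed size varies by compiler; it is commonly 1):

```c
#include <stdio.h>
#include <stdbool.h>   /* C99: bool is _Bool */

int main(void) {
    /* The size and representation of bool are implementation-defined;
       most compilers use one byte, but the standard does not require it. */
    printf("sizeof(bool) = %zu\n", sizeof(bool));
    return 0;
}
```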

(If you want to be portable, serialize and store your data to a known format so it can be loaded by any implementation. There are many standard serialization formats to choose from. The nice thing about standards is there are so many to choose from! See Cap'n Proto for one example (it's a "cerealization protocol" :laughing:).)
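
A minimal sketch of that idea for a lone bool, assuming only the convention "one byte on the wire, holding 0 or 1" (the function names are mine):

```c
#include <stdbool.h>
#include <stdint.h>

/* Serialize: map true to exactly 1 and false to 0,
   regardless of how this implementation represents bool. */
uint8_t bool_to_wire(bool b) {
    return b ? 1 : 0;
}

/* Deserialize: treat any non-zero byte as true. */
bool bool_from_wire(uint8_t byte) {
    return byte != 0;
}
```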

Reminds me of the rules for Redcode.