There is only binary, and hex is a much easier way for humans to read and write binary than decimal is.
So yeah... Everything in the computer/processor is binary and by default numbers are converted to and from decimal by C/C++. But you can optionally show the values (or input the values) as hex or binary.
Not every binary variable/value directly represents a "number"... For example, decimal 65 (= hex 41 = binary 01000001) might represent the number 65, or it could represent the [u]ASCII[/u] letter "A", or it might be part of a color or a microprocessor instruction, etc. The software has to know the context/use to know what it represents. If you open a file with a hex editor you'll see the hex values, and any values that can represent printable ASCII will also show the ASCII character. Every place there is a 41 hex you'll see "A", and the hex editor doesn't know whether it really represents an "A" or not.
Characters can get confusing because, for example, the ASCII character for the digit "1" (the one that's printed/displayed) is represented by the decimal value 49 (you'll see that on the ASCII chart). Again, that's normally handled automatically in the background by C/C++, so when you print out a variable with the value 1, you never see that it's converted to character code 49 before being printed/displayed; you just see "1". But if you have the character "1" in a text file, it's stored as the value 49 (hex 31).
Also, hex or binary format can emphasize or highlight the fact that individual bits in a value are important.
Here's an example - I make lighting effects where the on/off state of an LED or light is represented by one bit in a variable. That makes things like chasing/sequencing super-easy, and it actually makes everything easier.
Say I have a string of 16 LEDs and every other LED is on, so the "status" variable looks like this in binary:
0101010101010101 (binary)
That's kind of hard for humans to read & write (especially if it's not a simple, regular pattern like that one). In hex, that same pattern looks like this:
5555 (hex) I did that conversion in my head.
In decimal it looks like:
21845 (decimal) I had to use a calculator.
So, here's the advantage to hex... Every nibble (group of 4 bits) converts exactly to one hex digit. Every time you see 5, the pattern is 0101. By memorizing 16 hex-to-binary conversions you can easily learn to convert variables of any size between hex and binary in your head! About half of those conversions are super easy to remember, and you already know zero and one.
At one time, I made some flash cards to teach myself to make the 16 conversions.
Converting between decimal and binary is not so easy and it requires a calculator (except for some small numbers that you can remember).
You can do a TON of programming without using hex or binary. But when experienced programmers do need to work with bits or binary they use hex. (It's a lot more common with "low level" programming (microcontrollers) than when you're developing an application for Windows or a phone, etc.)