A program needs some way to know where a string stops. After all - a string is just some bytes in memory; there's nothing special about the bytes immediately after your string.
There are three ways to do this.
1 - fixed width. Whenever you work with a string, it's always a string of however many characters, and the code that uses the string is simply expected to know that. For instance, PINs might be strings of four characters, no more and no less. Everything that works with PINs is expected to know that.
Obviously, this won't work for anything that needs strings of varying length.
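A minimal sketch of the fixed-width convention, assuming a four-character PIN (`pin_t` and `print_pin` are names made up for the example):

    #include <stdio.h>

    /* Fixed width: a PIN is always exactly 4 characters.
       No terminator and no count are stored anywhere. */
    typedef struct { char digits[4]; } pin_t;

    static void print_pin(const pin_t *p)
    {
        /* "%.4s" prints exactly 4 characters, so no '\0' is needed */
        printf("%.4s\n", p->digits);
    }

    int main(void)
    {
        pin_t pin = { { '1', '2', '3', '4' } };
        print_pin(&pin);
        return 0;
    }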
2 - a character count. Whenever you use strings that might vary in length, a count must be passed as part of the string. This is how it used to be done in - for instance - Pascal. You can either hold the count as a separate parameter, or you might adopt a convention that the first byte of the string is actually a count (limiting you to 255 characters).
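Here's a sketch of that length-prefix convention written in C, storing the count in the first byte the way classic Pascal did (`pstr_print` is a made-up name):

    #include <stdio.h>

    /* Pascal-style string: byte 0 holds the length, data follows.
       The count is one byte, so the string is capped at 255 chars. */
    static void pstr_print(const unsigned char *p)
    {
        unsigned len = p[0];           /* first byte is the count */
        for (unsigned i = 1; i <= len; i++)
            putchar(p[i]);
        putchar('\n');
    }

    int main(void)
    {
        /* "\x05hello": length byte 5, then the five characters */
        pstr_print((const unsigned char *)"\x05hello");
        return 0;
    }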
C does supply quite a bit of support for these two methods by way of the strn* functions - strncpy, strnlen, etc.
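For example, a bounded copy into a fixed-size buffer, using the standard strncpy. One gotcha worth noting: strncpy doesn't add a '\0' when the source doesn't fit, so you terminate by hand:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char src[] = "hello, world";
        char dst[8];

        /* Copy at most sizeof dst - 1 bytes, then terminate by hand:
           strncpy does NOT add a '\0' when the source doesn't fit. */
        strncpy(dst, src, sizeof dst - 1);
        dst[sizeof dst - 1] = '\0';

        printf("%s\n", dst);   /* prints "hello, " */
        return 0;
    }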
3 - a terminator character. This is the C method - strings have a trailing '\0', an ASCII NUL. Disadvantages of this are:
* strings can never have '\0' as a character in them
* the only way to know how long a string is is to scan it
* you always have to make room for that trailing '\0'
Advantages are:
* more compact - you only ever need one extra byte
* unlimited length
* architecture independence
* the C idiom for string copy,

        while (*p++ = *q++);

  compiles down to two machine-language instructions on the PDP-11, which is the machine that the guys who invented C were using.
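Wrapped up as a function, that idiom looks like this - a sketch of a strcpy-style copy (`my_strcpy` is a made-up name; the real strcpy behaves the same way):

    #include <stdio.h>

    /* Copy q into p, including the trailing '\0'.
       The loop condition is the character just copied, so the loop
       stops exactly when the '\0' has been copied. */
    static char *my_strcpy(char *p, const char *q)
    {
        char *start = p;
        while ((*p++ = *q++))
            ;
        return start;
    }

    int main(void)
    {
        char buf[32];
        printf("%s\n", my_strcpy(buf, "hello"));
        return 0;
    }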
C provides support for this third method in that
* the libc functions that work with strings expect this, and
* in the C language, string literals are turned into NUL-terminated arrays of char by the compiler.
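You can see both points in a short sketch: sizeof counts the hidden terminator the compiler appended, while strlen finds the length the only way it can - by scanning for the '\0':

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char s[] = "hello";   /* compiler stores {'h','e','l','l','o','\0'} */

        printf("sizeof s  = %zu\n", sizeof s);   /* 6 - includes the '\0' */
        printf("strlen(s) = %zu\n", strlen(s));  /* 5 - scans up to the '\0' */
        return 0;
    }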