BCD to decimal 16bit input

Hi,

Looking for some support. I'm not an avid programmer, nor do I have that level of programming skill, but here's the question.

Bit old school, but I have a 16-bit parallel-loading PLL chip that needs 16 bits of data to programme the output frequency.

I found some code that reads an 8-bit input from the digital pins, but when I go to 16 bits the output just doesn't work and gives me all sorts of rubbish.
The first 8 bits work fine.

Because it is rubbish. Please just tell us about the PLL, the data format it expects, and the source and format of the data you have and want to send to it. Also post any technical links or documents you have that might help us understand the problem.

You are reading a series of switches and building up a String such as "0110101". You then pass this String to a function convertBinaryToDecimal(long binary) which takes a long as a parameter, not a String.

It would be far simpler and easier to build up a number while reading the switches, rather than a String that you then have to convert to a number later.
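For instance, something along these lines (just an untested sketch; firstPin stands in for whatever your lowest switch pin is, and the bit order may need flipping to match your wiring):

const uint8_t firstPin = 25;   // first of the 16 switch pins (assumed)

void setup() {
  Serial.begin(9600);
}

void loop() {
  uint16_t value = 0;
  for (uint8_t i = 0; i < 16; i++) {
    if (digitalRead(firstPin + i) == HIGH) {
      value |= (uint16_t)1 << i;   // set bit i directly, no String involved
    }
  }
  Serial.println(value);
  delay(500);
}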

If this is the code from that YT channel - unsubscribe!

@s1richar, please edit your opening post, select all code and click the </> button to apply code tags, then save your post. It makes it easier to read, easier to copy, and prevents the forum software from misinterpreting the code.

Yes, it was code from a YT channel; I never subscribed to it.
I think I have found the issue. Not being too savvy, but I think it's because the Nano and Mega only support 8-bit and I need to support 16-bit (65536). There seem to be some threads on how to get round this, which is what I'm looking for.

Nano/Mega etc. deal with 16bit or even 32bit variables just fine.
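For example, a trivial untested sketch just to show 16-bit and 32-bit values being handled:

void setup() {
  Serial.begin(9600);
  uint16_t sixteenBit = 65535;            // largest 16-bit value, fine on a Nano/Mega
  uint32_t thirtyTwoBit = 4000000000UL;   // 32-bit values work too
  Serial.println(sixteenBit);
  Serial.println(thirtyTwoBit);
}

void loop() {}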

Not sure what the issue is then. If I look at my BCD output to decimal conversion I get the following:
1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1824, 1816
So up to 10 bits it's OK, then I get the erroneous values 1824 and 1816 instead of 1024 and 2048.

Because you removed your code, nobody will ever be able to tell you. Not quite true as we can see the history :slight_smile:

Try this code

// offset of pin from counter in loop()
const uint8_t pinOffset = 25;


void setup()
{
  Serial.begin(57600);
  while (!Serial);
}

void loop()
{
  uint16_t number = 0;   // unsigned 16-bit so bit 15 does not flip the value negative

  // wait for a character to arrive, else the terminal will be flooded
  if (Serial.available() == 0)
  {
    return;
  }
  while (Serial.available() > 0)
  {
    Serial.read();
  }

  // read inputs
  for (uint8_t cnt = 0; cnt < 16; cnt++)
  {
    uint16_t bitVal = digitalRead(cnt + pinOffset);
    number |= bitVal << cnt;
  }

  // print result
  Serial.println(number, BIN);
  Serial.println(number, HEX);
  Serial.println();
}

It compiles, not tested.

Program waits for serial input (at 57600 baud) after which it will read the inputs and display the value. Next it waits again for some serial input.

There is a counter going from 0 to 15; an offset is added so that 0 maps to pin 25, 1 to pin 26, and so on.

Thanks. Yes, I removed the code; I guess it goes somewhere. But this is a short summary of what I want it to do:

I read the data from the input pins of the Nano/Mega. The format is parallel data, so just 16-bit data in binary format, e.g. 1111 1111 0000 1111. The data is the output from a CD4029 binary up/down counter, so I am using the pins accordingly.

The data is simply changed from binary to decimal. Each bit change is a 5 kHz step on my legacy parallel-input PLL RF synthesiser, up to the full 16-bit input.

The original code reads the first 10 pins and decodes them correctly, but when it gets to 512 it should carry on to 1024 and it doesn't: the next step is 1824, then 1816, completely wrong.

/*

  This sketch converts an 8-Bit Binary number into a Decimal number.
  The Binary number is fed to the Arduino through an 8x DIP Switch.
  A function then converts this Binary number to its Decimal
  equivalent. These numbers are displayed on an OLED Display and Serial Monitor.

  This program is made by Shreyas for Electronics Champ YouTube Channel.
  Please subscribe to this channel. Thank You.

*/
//Include the libraries
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

//Initialize the variables
int bitVal;
String stringBit;
String stringBinary;
long binaryNumber;
int decimalNumber;
double frequency;

const double startingFrequency = 1240.000;
const double increment = .025;

#define SCREEN_WIDTH 128
#define SCREEN_HEIGHT 64

Adafruit_SSD1306 oled(SCREEN_WIDTH,SCREEN_HEIGHT);

void setup() {

  Serial.begin(9600);
/*
  //Sets pin 2 to pin 13 as input
  for (int x = 25; x < 41; x++) {

    pinMode(x, INPUT);
*/
 // }

  oled.begin(SSD1306_SWITCHCAPVCC, 0x3C);
  oled.clearDisplay();
  oled.clearDisplay();

}

void loop() {

  //Reads the Binary number from the DIP Switch
  for (int x = 25; x < 41; x++) {

    bitVal = digitalRead(x);
    stringBit = String(bitVal);
    stringBinary = stringBinary + stringBit;
    binaryNumber = stringBinary.toInt();

  }

  //Function to convert Binary to Decimal
  decimalNumber = convertBinaryToDecimal(binaryNumber);
  frequency = (decimalNumber - 1) * increment + startingFrequency;

  //Prints the Binary number on the Serial Monitor
  Serial.print("Binary: ");
  Serial.print(stringBinary);
  Serial.print("      ");

  //Prints the Decimal number on the Serial Monitor
  Serial.print("Decimal: ");
  Serial.println(decimalNumber);

  Serial.print("Frequency: ");
  Serial.println(frequency, 3);
  
  //Prints the Binary number on the OLED Display
  oled.clearDisplay();
  oled.setTextSize(1);
  oled.setTextColor(WHITE);
  oled.setCursor(2, 50);
  oled.print("RPos:");
  oled.println(stringBinary);

  //Prints the Decimal number on the OLED Display
  oled.setTextSize(2);
  oled.setTextColor(WHITE);
  oled.setCursor(2, 30);
  oled.print(frequency, 3);
  oled.setTextSize(1);
 // oled.setCursor(2, 30);
  oled.println("MHz");
  oled.setTextSize(1);
  oled.setTextColor(WHITE);
  oled.setCursor(2, 20);
  oled.setTextSize(1);
  oled.println("23cms ATV Band");
  oled.display();

  //Resets all the variables
  binaryNumber = 0;
  bitVal = 0;
  stringBit = "";
  stringBinary = "";
}
//Function to convert Binary to Decimal
long convertBinaryToDecimal(long binary) {

  long number = binary;
  long decimalVal = 1;
  
  long baseVal = 1;//Change This Value Default is 1
  
  long tempVal = number;
  long previousDigit;

  while (tempVal) {

    //Converts Binary to Decimal
    previousDigit = tempVal % 10;
    tempVal = tempVal / 10;
    decimalVal += previousDigit * baseVal;
    baseVal = baseVal * 2;//Default is 2
  }

  //Returns the Decimal number
  return decimalVal;
}

Which is it?

That won't be fun on a Nano, I think.

Can you post a schematic?

Have you tried running the convertBinaryToDecimal function with a couple of different inputs to verify it converts correctly throughout a 16-bit number range?
Have you verified that the microcontroller correctly reads each of the 16 bits?
I.e. have you played around with some diagnostic output to Serial to see where your code goes wrong?
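To expand on that: on an AVR board String::toInt() returns a 32-bit long, which tops out around 2.1 billion, so any "binary-looking" String longer than 10 digits cannot be represented and toInt() hands back a wrapped value before convertBinaryToDecimal() is even called. That would explain it being fine up to 512 (a 10-digit String) and then producing 1824, 1816 and so on. A small test sketch to see it for yourself (untested; convertBinaryToDecimal() is copied from the sketch above):

long convertBinaryToDecimal(long binary) {   // copied from the posted sketch
  long decimalVal = 1;
  long baseVal = 1;
  long tempVal = binary;
  long previousDigit;

  while (tempVal) {
    previousDigit = tempVal % 10;
    tempVal = tempVal / 10;
    decimalVal += previousDigit * baseVal;
    baseVal = baseVal * 2;
  }
  return decimalVal;
}

void setup() {
  Serial.begin(9600);

  // try each single-bit 16-bit pattern, built exactly as the switch-reading loop builds it
  for (uint8_t bit = 0; bit < 16; bit++) {
    String stringBinary = "";
    for (int8_t pos = 15; pos >= 0; pos--) {
      stringBinary += (pos == bit) ? '1' : '0';
    }

    long binaryNumber = stringBinary.toInt();            // overflows once the String has 11+ digits
    long decimalNumber = convertBinaryToDecimal(binaryNumber);

    Serial.print(stringBinary);
    Serial.print(" -> toInt() = ");
    Serial.print(binaryNumber);
    Serial.print(" -> decimal = ");
    Serial.println(decimalNumber - 1);                   // -1 matches the (decimalNumber - 1) in the sketch
    // expect 1, 2, 4 ... 512 for bits 0 to 9, then garbage (1824, 1816, ...) from bit 10 up
  }
}

void loop() {}

If that reproduces the 1824 / 1816 values, the fix is the one already suggested above: build the number directly from the pins instead of going via a String.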


+1000

I haven't worked with BCD since college (a long, long time ago for me! :smiley: ) but I remember it being pretty easy. And I think I've got the logic (but no code for you)...

It depends on what the real BCD format is (bytes, 16-bit words, etc.) but I THINK you just have to read & "weight" one nybble at a time and sum them up.

Let's say you have a BCD value of 54 (0101 0100)...

You can bit-shift or mask, or whatever to read the "5" and the "4" separately.
4 is just 4 and 5 represents 50, and these can be read and treated as two separate values/variables.

We simply add 4 + 50 = 54 (duh!) :smiley: and we're done! We have a regular integer that shows up as 54 in decimal, or you can optionally display it in hex (0x36) or binary (0011 0110).

Of course "inside" the computer/microcontroller everything is really binary, and in C/C++ variables/values are written and displayed in decimal by default.

...You don't have to save the separate BCD digit variables. You can make a loop and multiply by the weight and sum as you go through the BCD number, as in the sketch below.
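Spelling that nibble-weighting out as code might look something like this (untested sketch, assuming the BCD digits arrive packed one per nibble in a 16-bit word, least significant digit in the bottom nibble):

// Convert a packed BCD word (up to 4 decimal digits, one per nibble)
// into an ordinary binary integer by weighting each nibble.
uint16_t bcdToBinary(uint16_t bcd) {
  uint16_t result = 0;
  uint16_t weight = 1;                              // 1, 10, 100, 1000
  for (uint8_t nibble = 0; nibble < 4; nibble++) {
    uint8_t digit = (bcd >> (4 * nibble)) & 0x0F;   // pick out one BCD digit
    result += digit * weight;
    weight *= 10;
  }
  return result;
}

void setup() {
  Serial.begin(9600);
  Serial.println(bcdToBinary(0x0054));   // prints 54, the example above
}

void loop() {}

Note that in this thread the CD4029s are counting in plain binary rather than BCD, so a straight bit-by-bit read is what's actually needed; the nibble weighting only applies if a counter were strapped for decade mode.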

P.S.
For testing/troubleshooting/experimenting, the built-in Windows calculator in Programmer Mode can convert between decimal, hex, binary, or octal. It doesn't use BCD but it can convert one nybble at a time because this is just binary-to-decimal or vice-versa.

Using a Mega, more pins.

Yeah, the Mega reads the pins as it should and counts nicely in binary, so that means the pins are being read correctly. As soon as it passes 512, the decimal output screws up!

TBH I have no real programming skills to work out what needs to be done. My son has just left for York University and he was my only get-out-of-jail card.

There's no real schematic.

How it should work: it's a simple case of DIP switches connected to pins 25 to 40 on the Mega, which gives the 16-bit input. The output displays on the OLED and the conversion is done in code.

The start frequency is 1240 MHz (0000 0000 0000 0000) and each step adds, in this case, 25 kHz (the true step size is determined by the PLL circuit and you enter that value into the code).
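In other words the mapping would be roughly this (taking the 1240.000 MHz base and 0.025 MHz step from the posted sketch, and assuming a count of 0 means the base frequency):

const double startingFrequency = 1240.000;   // MHz, frequency at count 0
const double increment = 0.025;              // MHz per step, i.e. 25 kHz

void setup() {
  Serial.begin(9600);
  uint16_t count = 512;                                        // whatever the 16 pins read
  double frequency = startingFrequency + count * increment;    // 1240.000 + 512 * 0.025 = 1252.800
  Serial.println(frequency, 3);                                // should print 1252.800
}

void loop() {}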

ATM just playing with a rotary pot encoder.

Rotary encoder or pot? How is it connected to the Mega? How do you make a 16bit value from/with your rotary and/or pot?

OK!

I'll start from the beginning:

I have a simple rotary encoder which clocks a CD4029 up/down counter, giving a 4-bit parallel binary output. Four of these are cascaded to create the 16-bit output.

I have a schematic, but it's not needed.

That feeds 16-bit parallel data into my PLL synthesiser, which is set up to increment in 5 kHz steps.

I am trying to read that data into the Mega and display it as a frequency relative to the 16-bit value.

(image attached)

RPos represents the rotary position in binary.

Then I'm out. To me, it looks like a problem most of the helpers here could fix in less than 2 minutes, with all the information and requirements presented clearly.
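For anyone picking this up later, here is roughly what the whole thing could look like once the String conversion is dropped: read pins 25 to 40 straight into a 16-bit value, turn that into a frequency and put it on the OLED. Untested sketch; the pin range, 1240.000 MHz base, 25 kHz step, display address and layout are simply lifted from the posts above, so adjust to suit:

#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

#define SCREEN_WIDTH 128
#define SCREEN_HEIGHT 64

Adafruit_SSD1306 oled(SCREEN_WIDTH, SCREEN_HEIGHT);

const uint8_t firstPin = 25;                 // CD4029 outputs on pins 25..40
const double startingFrequency = 1240.000;   // MHz at a count of 0
const double increment = 0.025;              // MHz per step (25 kHz)

void setup() {
  Serial.begin(9600);

  for (uint8_t i = 0; i < 16; i++) {
    pinMode(firstPin + i, INPUT);            // the CD4029 outputs drive these lines
  }

  oled.begin(SSD1306_SWITCHCAPVCC, 0x3C);
  oled.clearDisplay();
}

void loop() {
  // build the 16-bit value directly from the pins, pin 40 as the least significant bit
  uint16_t count = 0;
  for (uint8_t i = 0; i < 16; i++) {
    if (digitalRead(firstPin + 15 - i) == HIGH) {
      count |= (uint16_t)1 << i;
    }
  }

  double frequency = startingFrequency + count * increment;

  Serial.print("Count: ");
  Serial.print(count);
  Serial.print("  Frequency: ");
  Serial.println(frequency, 3);

  oled.clearDisplay();
  oled.setTextColor(WHITE);
  oled.setTextSize(1);
  oled.setCursor(2, 20);
  oled.println("23cms ATV Band");
  oled.setTextSize(2);
  oled.setCursor(2, 30);
  oled.print(frequency, 3);
  oled.setTextSize(1);
  oled.println("MHz");
  oled.setCursor(2, 50);
  oled.print("RPos:");
  for (int8_t bit = 15; bit >= 0; bit--) {
    oled.print((count >> bit) & 1);          // raw 16-bit reading, MSB first, padded to 16 digits
  }
  oled.display();

  delay(200);                                // simple rate limit so the display stays readable
}

The key difference from the original sketch is that the 16 pins are OR'd straight into a uint16_t, so there is no String, no toInt() and nothing to overflow.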