Decoding a digital signal?

Hello there!

I'm working on a project where I have a device that is putting out a digital data signal at 1200 bits/s.
The signal is basically a square wave that swings from 0 to 2.7 V, and I level-shifted it up to 5 V with a hex level shifter.

I'm still waiting for my new Arduino to arrive so I can start programming a decoder, but honestly, I've never done this before and I cannot find any sample code that would at least give me a hint on how to tackle this.

I know what protocol it uses, so my task is to write a program that samples the signal (reads digital 0s and 1s), puts the bits into some kind of buffer, and then processes the data to convert the bits to ASCII.

I've had a few ideas, but I'm not sure they would work because they seem too simple... probably naive, but I've never done this before, so yeah...

Idea 1: Attach an interrupt to a digital pin. Once the interrupt is triggered, read the pin every 1/1200 seconds and add each reading to the buffer for decoding.

Idea 2: Also use an interrupt to trigger readings, but this time with the pulseIn() function. Then divide the measured pulse durations by the bit period (1/1200 s) and add the calculated number of bits to the buffer.

Honestly, I suspect this wouldn't work due to imprecise timing...

How should I tackle this problem? Has anyone done this before with an Arduino? :slight_smile:

Thank you.

I would attack this by first comparing the signal to standard serial protocols. If it has a start bit like RS-232 serial, then it's very easy to just feed it into the Arduino's hardware serial.

If it looks more like I2C or SPI then use those modules on the Arduino.

If it really is totally unsuited to the existing hardware, then your first idea is how the Serial module actually works. Once it sees a valid start bit, it waits 1.5 bit periods so that the next sample lands in the middle of the first data bit, then it waits 1.0 bit period for each of the remaining 7 data bits. If the transmitter's rate drifts a little faster or slower than the receiver's, sampling the middle of each (anticipated) bit should still get the right answer.
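
As a rough illustration of that mid-bit sampling (a sketch only - a generic 8-bit frame on an assumed idle-high line, pin and rate assumed, not your protocol):

const uint8_t RX_PIN = 2;      // assumed receiver pin
const uint32_t BIT_US = 833;   // 1200 bps -> ~833 us per bit

uint8_t readFrame() {
  while (digitalRead(RX_PIN) == HIGH) {}   // wait for the start bit's falling edge
  delayMicroseconds(BIT_US + BIT_US / 2);  // 1.5 bit periods: middle of the first data bit
  uint8_t value = 0;
  for (uint8_t i = 0; i < 8; i++) {
    if (digitalRead(RX_PIN)) value |= (1 << i);  // sample mid-bit, LSB first
    delayMicroseconds(BIT_US);                   // one period to the next bit's middle
  }
  return value;  // stop bit not checked in this sketch
}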

Some more advanced serial receivers will 'oversample' four times in each bit period and then add the four samples together so they 'vote' on what the actual bit was.
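
A minimal sketch of that voting, reusing the assumed RX_PIN and BIT_US from above:

uint8_t readBitOversampled() {
  uint8_t votes = 0;
  for (uint8_t i = 0; i < 4; i++) {  // four samples spread across one bit period
    votes += digitalRead(RX_PIN);
    delayMicroseconds(BIT_US / 4);
  }
  return (votes >= 2) ? 1 : 0;  // majority vote; the 2-2 tie goes to 1 here
}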

I'm trying to decode the old POCSAG protocol which is quite different since it uses a preamble of 576 bits to start the actual message transmission.

I like the idea of oversampling in order to minimize reading errors, I'll try that once I at least manage to read the preamble correctly.

OK, well at least you have reasonable documentation on what to expect.

How stable is the transmitter? How stable is the receiver? Those old pagers would have had a fairly high drift in their clocks based on temperature. It could be out in the sun on a hot day or hanging off your belt in the snow. That could throw the clock off by enough to be one whole bit out by the end of the message. So the long preamble allows the receiver to tweak its clock rate to match the transmission.

If you think you have stable clock speeds then you don't have to worry about changing your receiver's speed based on the preamble. Then either method you suggested will work.

If you don't have stable clocks at both ends, then it would probably be easier to correct for it with the pulseIn() method. That restarts the timing for each bit from the previous transition's edge, so you hopefully won't get too far out of whack unless there's a long series of all zeroes or all ones.
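
Sketched out, that pulse-width approach looks something like this (reusing the assumed RX_PIN and BIT_US from above; LOW gaps would be handled symmetrically):

void decodeHighPulse() {
  uint32_t width = pulseIn(RX_PIN, HIGH);         // duration of the next HIGH pulse, in us
  uint8_t nBits = (width + BIT_US / 2) / BIT_US;  // round to the nearest whole number of bits
  for (uint8_t i = 0; i < nBits; i++) {
    // append a 1 to the buffer here; the timing error resets at every transition
  }
}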

Additionally, I would not get too hung up on counting the preamble bits. Assume you always miss a few at the beginning. So you just need to sync the bit-clock to the incoming bits and then look for the first frame-sync pattern. That starts your word-clock which identifies bits 0-31.
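
For the frame-sync hunt, a common trick (a sketch; POCSAG's 32-bit frame-sync codeword is 0x7CD215D8) is to shift every decoded bit into a 32-bit register and compare:

uint32_t syncShift = 0;

bool feedBit(uint8_t bit) {
  syncShift = (syncShift << 1) | (bit & 1);
  return syncShift == 0x7CD215D8UL;  // true the instant the sync codeword completes
}

A real decoder would usually also tolerate a bit error or two in that comparison.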

The transmitters are mostly government-built towers (not sure of the correct word) for broadcasting messages across the whole country (95% of the total area is supposedly covered). I'm fairly sure the transmitters are stable; as for the receiver (a Swissphone DE506 pager with a digital signal output), I can't really say. It sometimes happens that a character is missing or invalid, so I can assume the clock speeds are a bit off.

Anyways, for starters, I'll take the ideal assumption that both speeds are stable and work from there on. :slight_smile:

Thank you for your help, you shed some light on things I didn't even think of. :slight_smile:

Hey there, me again. :slight_smile: So I started programming a simple decoder which uses the "measure in the middle of one bit-period" idea we discussed earlier.

It works okay... well, it really doesn't. I even sync to the average bit rate of the signal, and it somehow reads every 10th bit wrong (reads it twice). I documented my code so you can better understand how I imagined the program working. The signal is generated by a Nano and I'm trying to decode it on a Mega2560.

The original (supposed) bit period should be 833 microseconds. I always measure it around 827-833, meaning a deviation of at most about 5 microseconds, which really shouldn't make the program read almost every 10th bit wrong... I must be missing something here. I tried implementing the oversampling algorithm but got even worse results (I'll post it later if we can't find a solution to this).

What am I doing wrong here? :o

Link to Pastebin for easier reading...

#define POCSAG_RECV 2 // receiver pin
#define SYNC_BITS 50 // number of bit periods to measure for calculating the average sample rate
#define MAX_MSG_BITS 1500 // when to stop reading the pin

volatile bool sync_start = false;
volatile bool start_sampler = false;
bool sample_rate_calculated = false;

int sample_rate = 0; // average bit period in microseconds, used as the sampling interval

int measured_periods[SYNC_BITS]; // array of multiple bit periods for syncing the bit-rate

int bits_read = 0; // to count against MAX_MSG_BITS

String data; // where our data is stored

/// function prototype ///
int calculate_sample_rate();
/// function prototype ///

void setup() {

  pinMode(POCSAG_RECV, INPUT_PULLUP); // set pin 2 as receiver input

  Serial.begin(9600);

  attachInterrupt(digitalPinToInterrupt(POCSAG_RECV), start_sync_ISR, RISING);

}

void loop() {

  if (sync_start && !sample_rate_calculated) { // start the bit-rate syncing sequence

    for (int i = 0; i < SYNC_BITS; i++) { // save multiple bit-rates into an array

      measured_periods[i] = pulseIn(POCSAG_RECV, HIGH);

    }

    sample_rate = calculate_sample_rate(); // calculate the average

    sample_rate_calculated = true; // set this variable to true in order for the loop to skip next time

    for (int i = 0; i < SYNC_BITS; i++) { // unnecessary -> print all bit-periods to see

      Serial.println(measured_periods[i]);

    }

    Serial.println("--------");
    Serial.print("Sample rate: ");
    Serial.println(sample_rate); // print the average sample rate...
    Serial.println("--------");

    sync_start = false; // turn this variable to false in order to skip this loop next time


    // here we attach the interrupt so we can start reading bits with synced bit-rate
    attachInterrupt(digitalPinToInterrupt(POCSAG_RECV), start_sampler_ISR, RISING);

  }

  if (start_sampler) { // run after bit-rate syncing is complete...

    delayMicroseconds(sample_rate + sample_rate / 2); // delay one bit period and a half, so we start in the middle of the next bit

    while (bits_read < MAX_MSG_BITS) { // read data from receiver pin until data is full (MAX_MSG_BITS number)

      data += !digitalRead(POCSAG_RECV); // invert due to internal pull-up resistor enabled
      delayMicroseconds(sample_rate); // delay one bit-period in order to sample again near the middle of the next bit
      bits_read++; // increase counter to reach MAX_MSG_BITS

    }

    start_sampler = false; // disable the data reading loop...

    Serial.println(data); // print acquired data...

  }

}

void start_sync_ISR() { // ISR for starting the sync process when the first bit reaches the receiver pin

  detachInterrupt(digitalPinToInterrupt(POCSAG_RECV)); // don't hard-code 0: on the Mega2560, pin 2 is INT4

  sync_start = true; // start syncing in the main loop

}

void start_sampler_ISR() { // ISR for starting the sampling process in the main loop

  detachInterrupt(digitalPinToInterrupt(POCSAG_RECV)); // same fix as above

  start_sampler = true;

}

int calculate_sample_rate() { // function for calculating the average bit period for sampling

  long measured_sum = 0; // must be initialized, otherwise the sum starts from garbage

  for (int i = 0; i < SYNC_BITS; i++) {

    measured_sum += measured_periods[i];

  }

  return measured_sum / SYNC_BITS;

}

As you can see, this is just the preamble... it should be 0101010101... but I get errors every 10th - 15th bit.

The signal is generated by a Nano

This is from the POCSAG standard you referenced

The preamble shown in Figure 1, consists of 576 bits of an alternating 101010 pattern transmitted at a bit rate of 512, 1200, or 2400 bps. The decoder uses the preamble both to determine if the data received is a POCSAG signal and for synchronization with the stream of data.

You use pulseIn() on the Mega to measure the length of 50 HIGH pulses out of the train. The pulse length comes out at 833 microseconds, which matches 1200 bps (1/1200 s ≈ 833 µs).

What is the code used to generate the signal from the Nano?

The heart of the measurement is here.

if (start_sampler) { // run after bit-rate syncing is complete...

    delayMicroseconds(sample_rate + sample_rate / 2); // delay one bit period and a half, so we start in the middle of the next bit

    while (bits_read < MAX_MSG_BITS) { // read data from receiver pin until data is full (MAX_MSG_BITS number)

      data += !digitalRead(POCSAG_RECV); // invert due to internal pull-up resistor enabled
      delayMicroseconds(sample_rate); // delay one bit-period in order to sample again near the middle of the next bit
      bits_read++; // increase counter to reach MAX_MSG_BITS

    }

The initial delayMicroseconds(sample_rate + sample_rate / 2) to get to mid-period, followed by delayMicroseconds(sample_rate) between readings, is evidently not keeping you in sync with the pulse train.

Another approach might be to start a hardware timer in the start_sampler ISR with a period of 833 µs (1200 Hz) and read the data bit in the overflow interrupt. You know there is a high bit when the start_sampler interrupt triggers, and you are at the beginning of a bit period. Given the interrupt latency and the processor cycles needed to get the timer started, you should always be reading near the start of a data bit rather than in the middle.

The TimerOne or TimerThree library might prove useful.
https://www.pjrc.com/teensy/td_libs_TimerOne.html
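
A bare-bones sketch of that timer-driven idea with TimerOne (pin 2 and 1200 bps assumed; storage and stop conditions left out). Restarting the timer inside the edge ISR aligns the sample clock's phase to the signal:

#include <TimerOne.h>

volatile uint8_t sampledBit;

void sampleISR() {
  sampledBit = digitalRead(2);  // read and store the bit here
}

void edgeISR() {
  Timer1.restart();                            // reset the timer phase to this edge
  Timer1.attachInterrupt(sampleISR);           // start sampling once per bit period
  detachInterrupt(digitalPinToInterrupt(2));   // the edge has served its purpose
}

void setup() {
  pinMode(2, INPUT);
  Timer1.initialize(833);  // ~one bit period at 1200 bps
  attachInterrupt(digitalPinToInterrupt(2), edgeISR, RISING);
}

void loop() {}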

You are using delayMicroseconds() for timing.

How do you know how much time the rest of your code takes?

I would record micros() at the start and use that for timing.
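
Something like this sketch (using your sample_rate and MAX_MSG_BITS): each sample gets an absolute deadline computed from the recorded start time, so per-loop overhead can no longer accumulate:

uint32_t t0 = micros();  // recorded at the trigger edge
for (int i = 0; i < MAX_MSG_BITS; i++) {
  uint32_t deadline = t0 + sample_rate / 2 + (uint32_t)i * sample_rate;  // mid-bit target
  while ((int32_t)(micros() - deadline) < 0) {}  // wait out the remainder, however long the work took
  // read the pin and store the bit here
}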

You are using delayMicroseconds() for timing.
How do you know how much time the rest of your code takes?

He is reading the data in a pretty tight while loop.

while (bits_read < MAX_MSG_BITS) { // read data from receiver pin until data is full (MAX_MSG_BITS number)

      data += !digitalRead(POCSAG_RECV); // invert due to internal pull-up resistor enabled
      delayMicroseconds(sample_rate); // delay one bit-period in order to sample again near the middle of the next bit
      bits_read++; // increase counter to reach MAX_MSG_BITS

    }

Can someone evaluate the compiled code of the while loop to show the number of processor cycles?

If digitalRead() takes almost 5 µs, how long does appending to a String take? The while comparison and the bits_read increment are likely only a few cycles. If the entire overhead of the while loop is 10 microseconds, is that enough to get out of sync within 10 pulses?

If instead of landing in the middle of an 833 µs period, the first reading is at 500 µs due to processor time, then the timing still has to slip by another 333 µs before a reading falls outside its bit.

Morgan has correctly pointed out a flaw in the code, but I'm not sure it accounts for the observed behaviour.

Without a true clock pulse for the serial readings, there may be a need to sync the readings to a rising edge at frequent intervals.
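
That re-sync could look something like this (a sketch building on the micros() suggestion above, reusing your POCSAG_RECV and sample_rate): every transition resets the sampling phase, so drift never accumulates past one bit:

uint8_t lastLevel = 0;
uint32_t nextSample = 0;

void beginSampling() {  // call once after the first edge is detected
  lastLevel = digitalRead(POCSAG_RECV);
  nextSample = micros() + sample_rate / 2;  // first mid-bit target
}

void pollOnce() {  // call as fast as possible from loop()
  uint8_t level = digitalRead(POCSAG_RECV);
  if (level != lastLevel) {  // a transition: re-center the phase on it
    lastLevel = level;
    nextSample = micros() + sample_rate / 2;
  }
  if ((int32_t)(micros() - nextSample) >= 0) {  // mid-bit reached
    // store 'level' as the sampled bit
    nextSample += sample_rate;
  }
}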

Thank you both for your answers. I thought the same. I'll measure the while loop time with micros() and report back so we can see what happens.

EDIT:

Okay, so I measured the time it takes the while loop to go through once.

How I measured (start_time, end_time are defined at the top):

while (bits_read < MAX_MSG_BITS) { // read data from receiver pin until data is full (MAX_MSG_BITS number)
  if (bits_read == 0) {
    start_time = micros();
  }
  //data += !digitalRead(POCSAG_RECV); // invert due to internal pull-up resistor enabled
  delayMicroseconds(sample_rate); // delay one bit-period in order to sample again near the middle of the next bit
  bits_read++; // increase counter to reach MAX_MSG_BITS
  if (bits_read == 1) {
    end_time = micros() - start_time;
  }
}

The measurement was 876 microseconds... Removing the data += expression brings it down to 840 microseconds. So appending to the String takes 36 microseconds and everything else around 12 microseconds. That also explains the error rate nicely: slipping roughly 43 µs per bit against the 833 µs period uses up the half-bit safety margin (~417 µs) in about 10 bits. No wonder I miss every 10th or so bit.

I'll try implementing a "circular" correction, but that isn't a stable way to fix this...
Any ideas on how I should store my bits to minimize time?

EDIT 2:

The Nano is basically a "black box". Someone already built this but didn't make the source code available, just the .hex files, so you can only upload it and use it.

EDIT 3:

Holy mother of god, applying the correction time makes it read every bit correctly. However, this isn't a stable fix; I'd have to write a function that recalculates the correction time for every message. Any other, simpler (not easier) ways to do this? I know writing such a function isn't hard, but is there a more elegant solution? Remember, I have to store ~1000 bits.
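
One option I'm looking at for the storage side (a sketch, sized by my MAX_MSG_BITS): a preallocated, bit-packed buffer instead of a String, so each bit costs a few cycles and there's no heap activity:

uint8_t bitBuf[(MAX_MSG_BITS + 7) / 8];  // 1500 bits fit in 188 bytes

inline void storeBit(int index, uint8_t value) {
  if (value) bitBuf[index >> 3] |=  (1 << (index & 7));
  else       bitBuf[index >> 3] &= ~(1 << (index & 7));
}

uint8_t getBit(int index) {
  return (bitBuf[index >> 3] >> (index & 7)) & 1;
}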

I suspect the original data receiver used a phase-locked loop to continually adjust the clocking frequency. You have no guarantee the data transmitter is not varying the data frequency a bit. The old receiver didn't care, as it was designed to self-adjust to match the bit transitions. That's the way the old phone data modems operated.

Paul

Those old pagers would have had a fairly high drift in their clocks based on temperature.

They jolly well didn't when I worked for Philips pagers 20+ years ago! The RF used a 2.5 ppm crystal (from NDK - superb) tuned to within 0.1 ppm, and the decoding processor (a CMOS 8051 derivative) had a 10 ppm 12 MHz crystal, plus a 32 kHz crystal used in sleep mode.

We tested them over -20...+50 °C with no problems. The low-temperature limit was the 1.5 V AAA cell.

The POCSAG protocol required the processor to wake up about once every 2 seconds and determine whether the datastream had a message for it.

Battery life was about 6 weeks.

Allan

allanhurst:
They jolly well didn't when I worked for Philips pagers! The RF was a 2.5 ppm crystal (from NDK - superb) tuned to within 0.1 ppm, and the decoding processor had a 10 ppm crystal.

We tested them over -20...+50 °C with no problems. The low-temperature limit was the 1.5 V AAA cell.

Allan

Probably true for the RF. I was referring to the data clocking.

Paul

You have no guarantee the data transmitter is not varying the data frequency a bit.

They were extremely accurate: <1 ppm. We built those, too.

And the datastream, being basically Manchester encoded, is very forgiving of decode-timing inaccuracies. It also had Hamming FEC - better than a simple checksum or CRC.

One of the managers at Philips was on the team which devised the system while he was at the PO.

One of the tests for pagers was that they still had to work perfectly after being dropped 12 times (twice on each 'face') from 6 feet onto solid concrete. Try that on a modern smartphone!

I knocked up 12 samples once for a salesman to demonstrate to clients - and he brought one back after a few weeks saying it didn't work. So I took it apart and had a look... No obvious damage, except that the loop antenna was bent. So I bent it straight, re-tuned it, and it worked fine.

When I gave it back I asked him if he had any idea how this might have happened.

He admitted one of his kids had played tennis with it against a brick wall.....

Pretty tough.

Allan

At those kinds of ppm accuracies, the Doppler effect from riding in a moving car might be the largest source of inaccuracy.

And at 2400 baud completely trivial.

Even the early 2G system specified a maximum vehicle speed of >300 km/h (about 0.3 ppm of the speed of light) - because this is faster than the French TGV, but not as fast as an airliner!

(Hence a 2G terminal used a reference VCTCXO, which had to be pullable by that much - normally a magic 13 MHz device by TDK, Murata, etc. The spec required 0.01 ppm accuracy, and we normally achieved at least 10x better than that.)

The paging system is obsolete in densely populated countries, because of the availability of cheap 2G etc mobile phones, which are far more versatile.

But the cost of the infrastructure is high, since the range is low, so you need many base stations, and hence many subscribers, to make it economically viable.

A pager transmitter had a usable range of often >100 miles, which meant the infrastructure was cheap. Hence pagers are still popular in sparsely populated areas of some countries where it is not viable to provide 2G etc coverage.

Allan

Mr Google led me to this code on GitHub for POCSAG decoding, which uses the TimerOne library to control the clock for bit reading as previously suggested. You should be able to get some ideas from it, or maybe it will even serve your needs.

I wrote something similar years ago for connecting burglar alarms using mains signalling for a 'neighbourhood watch' scheme... I'll see if I can dig out the code.

Allan

Well, my first solution worked on and off. I then implemented the Timer1 library as discussed earlier, and now it works consistently. I even tried de-syncing the sample timer by a few hundred microseconds to see how stable it is, and it had no effect. Less code, too. :slight_smile:

Okay, I'll have to take it back again... Euphoria got the best of me again...

I now have two problems... I mean, I could maybe explain the second one, but the first one I cannot figure out for the love of god...

I tried to minimize the code and delays so this is what I wrote:

#include <TimerOne.h>

#define POCSAG_RECV 2
#define MSG_LEN 1500

volatile bool signal_detected = false;

int bit_period = 833;
int half_bit_period = bit_period / 2;

volatile char data[MSG_LEN]; // raw bits, written by the timer ISR
char non_volatile_data[MSG_LEN]; // copy for printing from loop()

volatile int counter = 0;

void setup() {
  // put your setup code here, to run once:

  pinMode(POCSAG_RECV, INPUT_PULLUP);

  Timer1.initialize(bit_period);

  Serial.begin(9600);

  attachInterrupt(digitalPinToInterrupt(POCSAG_RECV), signal_detected_ISR, RISING);

}

void loop() {
  // put your main code here, to run repeatedly:

  if (signal_detected) {

    Timer1.attachInterrupt(sample_ISR);

    signal_detected = false;

  }

  if (counter >= MSG_LEN - 2) { // '>=' rather than '==': counter changes inside an ISR, so an exact value could be missed

    Timer1.detachInterrupt();

    data[MSG_LEN - 1] = '\0';

    for (int i = 0; i < MSG_LEN; i++) {

      non_volatile_data[i] = data[i];

    }

    Serial.println(non_volatile_data);

    counter = 0;

  }

}

void signal_detected_ISR() {

  signal_detected = true;

}

void sample_ISR() {

  if (counter < MSG_LEN - 1) { // guard so the ISR can't run past the buffer before it is detached
    data[counter] = !bitRead(PINE, 4) + '0'; // PE4 is digital pin 2 on the Mega2560; store as '0'/'1'
    counter++;
  }

}

My first problem: the first three bits read by sample_ISR() are wrong... they should all be alternating (0101...). Sometimes only the first two are wrong. I tried adding a delay to skip a few bits before starting to read, but those first bits are always read wrong for some reason. Why?

My second problem: every now and then, I read a single bit wrong for some reason (adding a delay to sample in the middle of the bit period makes it worse). One wrong bit and you get an error in the message. What am I doing wrong here?