Hello everyone,
I'm currently exploring the performance of packet-sending strategies in LoRaWAN, particularly the impact of edge computing on delay reduction. Here's the setup:
- No edge computing scenario:
  - Devices send a packet every 30 seconds, independently of each other.
- Edge computing scenario (a minimal sketch of this loop follows below):
  - Devices equipped with edge computing capabilities read a sensor value every 30 seconds.
  - They aggregate these values over a fixed number of readings (say, N readings).
  - Once N readings have been collected, the device computes their average.
  - Finally, the device sends a single packet containing the average, resulting in far fewer transmissions than in the no-edge scenario.
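For concreteness, here is a minimal sketch of the aggregation loop in the edge scenario. Note that `read_sensor`, `send_packet`, and N = 10 are placeholder names and values of my own, not part of the setup above:

```python
import time

N = 10                # readings per packet (hypothetical: 10 x 30 s = 5 min)
READ_INTERVAL_S = 30  # sensor sampling period from the setup above

def read_sensor() -> float:
    """Placeholder for the real sensor driver (assumed name)."""
    raise NotImplementedError

def send_packet(payload: bytes) -> None:
    """Placeholder for the LoRaWAN uplink call (assumed name)."""
    raise NotImplementedError

def aggregation_loop() -> None:
    readings: list[float] = []
    while True:
        readings.append(read_sensor())          # one reading every 30 s
        if len(readings) == N:
            avg = sum(readings) / N             # aggregate N readings
            send_packet(f"{avg:.2f}".encode())  # one uplink instead of N
            readings.clear()
        time.sleep(READ_INTERVAL_S)
```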
Now, I'm curious about the following:
Does switching from a packet every 30 seconds (no edge) to one averaged packet every 5 minutes (edge computing, N = 10) actually decrease delay? A rough time-on-air comparison is sketched below.
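To get a feel for the channel-occupancy side of the question, here is a back-of-envelope comparison using the standard Semtech SX1276 time-on-air formula. The `lora_airtime_s` helper and the payload sizes (4 B application payload plus ~13 B of LoRaWAN MAC overhead) are my own assumptions, not from the setup above:

```python
import math

def lora_airtime_s(payload_bytes: int, sf: int, bw_hz: int = 125_000,
                   cr: int = 1, preamble_syms: int = 8,
                   explicit_header: bool = True, crc: bool = True,
                   low_dr_opt: bool = False) -> float:
    """Time on air per the Semtech SX1276 datasheet formula."""
    t_sym = (2 ** sf) / bw_hz
    ih = 0 if explicit_header else 1
    de = 1 if low_dr_opt else 0
    num = 8 * payload_bytes - 4 * sf + 28 + (16 if crc else 0) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25 + n_payload) * t_sym

# Assumed sizes: 4 B application payload + ~13 B LoRaWAN MAC overhead.
PHY_PAYLOAD = 4 + 13

for sf in (7, 12):
    toa = lora_airtime_s(PHY_PAYLOAD, sf, low_dr_opt=(sf >= 11))
    no_edge = 120 * toa  # 120 uplinks/h at one packet per 30 s
    edge = 12 * toa      # 12 uplinks/h at one packet per 5 min
    print(f"SF{sf}: ToA = {toa * 1000:.0f} ms/packet, "
          f"airtime/h: no-edge = {no_edge:.1f} s, edge = {edge:.1f} s")
```

Under these assumptions, on EU868 with the usual 1% duty cycle (36 s of airtime per hour per sub-band), the SF12 no-edge case (~158 s/h) wouldn't even be permitted, while the aggregated case fits comfortably. On the other hand, averaging adds buffering latency: each individual reading now waits up to N × 30 s before it leaves the device. So the answer may depend on which delay you measure, per-packet latency or the freshness of a given reading.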
I'd appreciate any insights, experiences, or suggestions. Has anyone experimented with similar setups or run into challenges when optimizing LoRaWAN packet-sending strategies?
Looking forward to your input and discussions!
Thank you.