Need some advice on a multi-chip project

I'm a network admin by trade and often have a stack of routers (or such) with RS-232 console ports that I need access to. I'd like to build a serial concentrator with Ethernet (telnet) access. The design would use, say, four ATtiny2313s connected to MAX232s for the slave ports, each providing local buffering and delivering the raw bytes upstream to probably an ATmega328P, connected to its own MAX232 (for upstream console) and a WizNet Ethernet interface. The master AVR would provide a menu for IP configuration and selecting a slave port. (Or I might stick to sequential TCP port numbers for direct access.) I'm thinking of having them communicate via an i2c bus, but we'll see. Depends on how well it could keep up with 4x115200 I guess. But that's not the point...

To make this interesting (i.e., just for giggles and the learning opportunity), as well as for potential ancillary timing benefits, I'd like to use a single clock source distributed to all the AVRs. I intend to use vanilla AVR C, and therefore serial-friendly clock speeds, since Arduino compatibility is not an issue. I do want to retain ICSP functionality, so the scheme has to keep working while any given chip is being programmed in-circuit. The boards will be a custom PCB design. (Maybe even self-programming from the 328 to the 2313s.. hmmm..) The ability to pre-test on a breadboard would also be a plus.

So, how would you do this? I know next to nothing about oscillators, just a bit about how to pick crystals and load caps. Buffering and distribution is a new game, and I want to understand it better. The AVRs have a variety of clock source options, and I don't think pins will be at a premium in this case. Which approach is suited to this application, and why? Any useful articles or links?

What does having the chips run from a common clock provide other than the hassle of distributing 16 MHz cleanly and dealing with unintended emissions and likely loss of signal integrity?
Cards will be communicating via RS232 asynchronously, or via ethernet and its independent clock.

the 2313 is a very limited chip in terms of RAM (128 bytes), and if you're stringing clock signals all over the place (which should be interesting and very error-prone), IMO I would look at a Mega, which is 1 chip with 4 serial ports, 8k of RAM, and 1 shield

Excellent project! Commercial serial concentrators always seem to be so expensive.

What speed are you running these serial ports at? If 9600, I would just use the internal oscillators on the ATTinies. There's a TinyTuner utility out there that should make them plenty accurate enough for 9600. I've even had success at 38,400 with an untuned internal oscillator, but you should tune for reliable results.
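
Applying the tuned value at run time is trivial, by the way. Here's a minimal sketch, assuming a TinyTuner-style calibration flow -- the 0x6B is just a made-up placeholder for whatever per-chip value the tuner reports:

```c
#include <avr/io.h>

int main(void)
{
    /* Placeholder calibration byte -- substitute the per-chip value
       reported by TinyTuner (or similar) for this particular ATtiny. */
    OSCCAL = 0x6B;

    /* ...then set up the UART for 9600 against the trimmed internal RC... */
    for (;;)
        ;
}
```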

I believe the default I2C rate is 100kHz and can be bumped up to 400kHz.

CrossRoads:
What does having the chips run from a common clock provide other than the hassle of distributing 16 MHz cleanly and dealing with unintended emissions and likely loss of signal integrity?
Cards will be communicating via RS232 asynchronously, or via ethernet and its independent clock.

Fair question. First, it provides (as I said) an opportunity to experiment with clock distribution. If I can get the design to work well, I know I'm at least capable of handling MHz signals without ruining them, and I learn a little bit about oscillators. Feather in my cap for that, and that's entirely worth it to me to try. Yes, there are more efficient ways to achieve the project goal -- I understand that, but that's not the point. I'm not trying to cut costs on a production device. I'm trying to learn a particular skill. It's an entirely different goal.

Osgeld:
the 2313 is a very limited chip in terms of RAM (128 bytes)

I'm aware. It's the smallest chip with hardware serial IIRC. That's why I chose it. I might end up trying to code its firmware in assembly since it serves a limited function. That's another thing I want to get my hands dirty with.

Osgeld:
if you're stringing clock signals all over the place (which should be interesting and very error-prone), IMO I would look at a Mega, which is 1 chip with 4 serial ports, 8k of RAM, and 1 shield

That would be a better solution, but it would defeat the point of the exercise. The challenge is the primary goal, with a side benefit of having a working serial concentrator.

I would eventually like to start working with ARM and FPGA designs. If I'm going to go that route, I'd better get used to working with MHz signals. This is step one.

tylernt:
Excellent project! Commercial serial concentrators always seem to be so expensive.

What speed are you running these serial ports at? If 9600, I would just use the internal oscillators on the ATTinies. There's a TinyTuner utility out there that should make them plenty accurate enough for 9600. I've even had success at 38,400 with an untuned internal oscillator, but you should tune for reliable results.

I believe the default I2C rate is 100kHz and can be bumped up to 400kHz.

Forgot to reply to this one. Typical would be 9600, but the occasional appliance defaults to 115200, so support at that speed is a mandatory goal. For that reason, I'm looking at clock speeds that are even multiples of the standard baud rates. 1, 8, and 16MHz won't cut it here, and since I don't care about Arduino libraries and internal oscillators in this case, there's no reason not to go with the better 14.7456 or 18.432MHz options instead.
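
To put rough numbers on that, here's the back-of-the-napkin check I'm doing (standard normal-speed UBRR formula; the figures are mine, so double-check them):

```c
#include <math.h>
#include <stdio.h>

/* Normal-speed async mode: UBRR = F_CPU / (16 * baud) - 1, rounded */
static void check(double f_cpu, double baud)
{
    long   ubrr   = lround(f_cpu / (16.0 * baud)) - 1;
    double actual = f_cpu / (16.0 * (ubrr + 1));
    printf("%10.0f Hz @ %6.0f baud: UBRR=%3ld, error %+5.2f%%\n",
           f_cpu, baud, ubrr, 100.0 * (actual - baud) / baud);
}

int main(void)
{
    check(16000000.0, 115200.0);   /* about -3.5% off -- marginal        */
    check(14745600.0, 115200.0);   /* exact: 14745600 / 16 / 8  = 115200 */
    check(18432000.0, 115200.0);   /* exact: 18432000 / 16 / 10 = 115200 */
    return 0;
}
```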

So far I'm thinking this:

Use a circuit like this to set up an oscillator, then distribute that to the XTAL1 pins (with the External Clock fuse setting) on each AVR. From what I've read, the crystal circuit needs unbuffered inverters to oscillate, but should I buffer this output before branching off, or no? This is where I'm over my head a little...

Do you need to talk to all the routers at once? Or just log into one, configure it, and leave (like those boxes that switch keyboards between devices)?

If it's the latter, you could use an 8- or 16-port multiplexer to switch Tx/Rx and keep things really simple.

As for the clock, sounds like an interesting learning exercise, but I suspect the clock signal will degrade over longish cable runs and introduce problems.

SirNickity:
Forgot to reply to this one. Typical would be 9600, but the occasional appliance defaults to 115200, so support at that speed is a mandatory goal.

Ok. If you want all 4 ATTinies to do 115200 simultaneously and connect back to your Atmega via I2C, you'll need to skip the 400kHz mode and go straight to 1MHz -- see: Arduino Playground - HomePage
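
Rough math behind that, counting 9 SCL clocks per data byte (8 bits plus the ACK) and ignoring address bytes and start/stop conditions, so reality is a bit worse -- these are my own numbers, not from that page:

```c
#include <stdio.h>

int main(void)
{
    /* A 115200-baud 8-N-1 character takes 10 bit-times, so each port delivers
       at most 11,520 payload bytes/s; four ports one-way is about 46 kB/s. */
    double needed  = 4.0 * 115200.0 / 10.0;
    double at_400k = 400000.0 / 9.0;      /* I2C byte ceiling at 400 kHz */
    double at_1m   = 1000000.0 / 9.0;     /* I2C byte ceiling at 1 MHz   */

    printf("needed (one way): %6.0f B/s\n", needed);   /* ~46080  */
    printf("I2C @ 400 kHz:    %6.0f B/s\n", at_400k);  /* ~44444  */
    printf("I2C @ 1 MHz:      %6.0f B/s\n", at_1m);    /* ~111111 */
    return 0;
}
```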

As for the functional goals of the finished widget: the upstream serial console (connected to the master AVR) would support switching between ports, but since that can (and will) operate concurrently with Ethernet access, the downstream ports will all have to be online at all times. Even just for Ethernet, IIRC the WizNet interface I have on hand supports four simultaneous sessions. It would be nice if those could be individual telnet sessions to each port rather than simultaneous clients of one active port.

Hey, if I wanted simple I'd buy more USB-to-serial adapters and run the blink sketch on an Uno. :wink: This is about stretching the comfort zone a little.

Shouldn't be long cable runs anywhere. If I can get it stable on a breadboard for proof of concept before sending off for PCBs, that's a nice plus. Otherwise, I have half of a PCB used up for another project, and I'm just waiting for a good use for the other half before I send in the order. If this project fits the space and hole-count restrictions, I'm good to go! Then the maximum distance is only as long as the trace length from the oscillator to the farthest IC. With end termination, a ground plane underneath, straight-as-possible traces, and minimal crosstalk opportunities with other nearby elements, I think I'll be OK.

tylernt:
Ok. If you want all 4 ATTinies to do 115200 simultaneously and connect back to your Atmega via I2C, you'll need to skip the 400kHz mode and go straight to 1MHz

Yeah.. although looking at the mega328 datasheet, it's only rated to 400kHz. Hmm.. that could be a problem. I might need to consider either a parallel or high-speed SPI bus instead.

SirNickity:
Yeah.. although looking at the mega328 datasheet, it's only rated to 400kHz. Hmm.. that could be a problem. I might need to consider either a parallel or high-speed SPI bus instead.

I was perusing digipot datasheets the other day and I noticed that some of them support daisy-chaining SPI so you don't need a dedicated SS line for each. I assume you'd put a packet on the bus with a destination ID prefix, and the other devices on the bus would ignore the next byte if it wasn't intended for them. Of course the extra overhead halves bandwidth if you put a prefix on every byte, but with ~4MHz to play with I think you have enough.

I know you're not using digipots but perhaps the same technique can be used for AVRs.
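
Very roughly, the master side might look something like this on a 328P. The one-byte-ID-then-data framing is just my guess at how you'd do it, and the slave-side filtering isn't shown:

```c
#include <avr/io.h>
#include <stdint.h>

static void spi_master_init(void)
{
    /* SS (PB2), MOSI (PB3), SCK (PB5) as outputs; SS idles high */
    DDRB  |= _BV(PB2) | _BV(PB3) | _BV(PB5);
    PORTB |= _BV(PB2);
    SPCR = _BV(SPE) | _BV(MSTR);           /* enable SPI, master mode, F_CPU/4 */
}

static uint8_t spi_xfer(uint8_t b)
{
    SPDR = b;
    while (!(SPSR & _BV(SPIF)))
        ;                                  /* wait for the byte to clock out */
    return SPDR;
}

/* One payload byte to one slave: destination ID first, then the data byte. */
static void spi_send_addressed(uint8_t slave_id, uint8_t data)
{
    PORTB &= ~_BV(PB2);                    /* assert the shared SS */
    spi_xfer(slave_id);
    spi_xfer(data);
    PORTB |=  _BV(PB2);                    /* release it */
}

int main(void)
{
    spi_master_init();
    spi_send_addressed(2, 'x');            /* e.g., one character to tiny #2 */
    for (;;)
        ;
}
```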

Depends on how well it could keep up with 4x115200 I guess.

4*115200bps without DMA or at least deep (HW) FIFOs is pretty hard. Note that SPI or I2C won't be any easier; on an AVR they're still essentially one-interrupt-per-byte interfaces. An Xmega might work better as a core cpu.

I'll note, somewhat sadly, that individual single-port ethernet(tcp)/serial devices like the Wiznet WIZ108SR, or even the more polished Lantronix products, have gotten quite a bit cheaper than large multiport terminal/console servers used to be. Or you can get an old 16-port cisco-2511rj or equivalent on eBay for about $300. Sigh.

westfw:

Depends on how well it could keep up with 4x115200 I guess.

4*115200bps without DMA or at least deep (HW) FIFOs is pretty hard.

I believe the ATTinies can have most of their RAM dedicated to a receive from RS232 / send to Atmega buffer. Most of the data coming from the ATmega to the ATTinies will be single characters typed by a human at relatively sparse intervals.

I see two ways to keep from filling the ATTiny's tiny receive from Atmega / send to RS232 buffer. One would be letting the ATTinies send flow control bytes back to the Atmega. The other would be scheduling outgoing bytes on the Atmega at a rate slightly below the serial baud rate of each ATTiny.
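
The flow-control option could be as simple as a ring buffer with high/low water marks. A rough sketch -- the sizes and the XON/XOFF values are just what I'd pick, and in real firmware the count updates would need to be protected if an ISR touches them:

```c
#include <stdint.h>

#define BUF_SIZE   96     /* leave some of the 2313's 128 bytes for the stack */
#define HIGH_WATER 80     /* ask the Atmega to pause above this */
#define LOW_WATER  16     /* tell it to resume once we drain below this */
#define XOFF 0x13
#define XON  0x11

static volatile uint8_t buf[BUF_SIZE];
static volatile uint8_t head, tail, count;

/* Queue a byte arriving from the Atmega. Returns XOFF when it's time to
   ask the master to pause, 0 otherwise. */
uint8_t buf_put(uint8_t b)
{
    if (count >= BUF_SIZE)
        return XOFF;                       /* full: drop it and beg for mercy */
    buf[head] = b;
    head = (head + 1) % BUF_SIZE;
    count++;
    return (count >= HIGH_WATER) ? XOFF : 0;
}

/* Pull the next byte for the RS-232 side. Sets *resume to XON once the
   buffer has drained enough for the master to continue. */
int16_t buf_get(uint8_t *resume)
{
    *resume = 0;
    if (count == 0)
        return -1;                         /* nothing waiting */
    uint8_t b = buf[tail];
    tail = (tail + 1) % BUF_SIZE;
    count--;
    if (count == LOW_WATER)
        *resume = XON;
    return b;
}
```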

You're absolutely right that the majority of the activity will be at typing speed. However, showing and pasting configuration snippets, viewing logs, etc., can all fill up the UART pipe. The chances of all four ports running at full speed are slim to none, but I'm still going to try to get as close to that level of performance as I can.

A parallel bus is looking more and more attractive. In theory the WizNet 5100 IC supports parallel data, but I will have to check whatever module I bought to see if that interface is broken out to pins. In the worst case where all four terminals are going full-tilt, each port is being accessed by a telnet session, and one port is being monitored on the local console port, the master could signal a slave device to write a byte to the data bus, while itself writing bits to the address bus, and sampling the data bus to hand off to its own UART. No one device would have to cope with the full four queues at once, but the master device would have to coordinate it all...

westfw -- no kidding about the device costs. Last time I checked into getting a 16-port Raritan console server it was over $1k. My device isn't going to have multi-user access controls or a pretty web interface, but I'm hoping the cost comes in much lower. :wink:

Any thoughts on the clocking though?

2313...It's the smallest chip with hardware serial IIRC.

ATtiny1634, 20 pins, 2 USARTs, 1k SRAM, 16k flash


Rob

I believe the ATTinies can have most of their RAM dedicated to a receive from RS232 / send to Atmega buffer.

Sure, but then what? You still have to transfer it to the bigger AVR (a byte at a time, one interrupt per byte, over an interface that isn't really any more advanced than a UART), and then IT has to transfer the data to the network. Having the "little" AVRs buffer data solves latency issues, but not throughput issues. You have about 100k Bytes/second you need to pump, which is only about 160 instructions per byte at 16MHz.

This is another reason that USB has displaced UARTs on PCs.

westfw:
You have about 100k Bytes/second you need to pump, which is only about 160 instructions per byte at 16MHz.

Hm, my calculator says 64.8KB/s (assuming 8-N-1) if you go ahead and use traditional SS SPI (no packet prefixes). But you're right, that adds up to a lot of data. Maybe only allowing one port to be at 115200 at a time is an acceptable design constraint?

One possible way to get more symbols through a serial connection is varicode on the ATTiny <-> Atmega bus. Not sure if the CPU overhead for decoding is a net gain or loss though. Varicode is just cool, if nothing else. :slight_smile:

SirNickity, you mentioned a parallel bus to the Wiznet -- I'm not sure that's where the parallel bus should be. The ATTiny 2313 has pins PB0 through PB7 that (I presume) could be used as an 8-bit parallel bus to the Atmega. On the Atmega, PD0 through PD7 could be used (leaving hardware SPI port B free for the Wiznet). Further, with an additional clock pin, data could be sent in bursts of several bytes at a time from each ATTiny in turn to cut down on overhead. Don't use interrupts; have the Atmega poll each ATTiny round-robin and let them buffer between bursts. Buffer again on the Atmega side to make transfers to the Wiznet less frequent and with less overhead.

The Wiznet may be able to handle SPI at clock/2 (9.216MHz), as the datasheet appears to show 80ns for read cycle time and 70ns for write cycle time? I think that works out to about 109ns per bit from the Atmega; dunno if that's pushing the tolerances or not.

Excellent project! Commercial serial concentrators always seem to be so expensive.

Just a warning... Since you are working at a commercial company as an employee, you must have permission to install any custom-designed hardware on their network. In some companies, you may need AVP or VP level written approval.

This is a security and legal issue: your company "could" claim your design as their own. Since you are deploying in a commercial environment, a 'console' maker could claim patent infringement and sue for loss of sales, infringement, and damages. You may even find yourself being sued by your own company to recover legal fees.

You have been warned. I worked as a Senior Technical Architect in a U.S. Fortune 20 company and you would have your pants scared off by the sh-t I have seen!

Ray

Say... if the ATTiny/Atmega clocks are all synchronized, does that mean bits/bytes can be transferred in a single clock cycle instead of two?

a 'console' maker could claim patent infringement and sue for loss of sales, infringement, and damages.

This seems unlikely. "Console concentrators" have prior art dating back to statistical multiplexers, and even at the height of the .com boom they floated by without much contention (aside from DEC wanting a license for LAT, and three different companies having patents relating to V.42bis compression, and ARAP being pretty proprietary, and ...) Your basic Internet Terminal Server (the BBN/Honeywell TIP) was a DoD project and so probably had additional restrictions against being proprietary.
(I was sort-of "Mr Terminal Server" at cisco for most of the time that they sold terminal servers. ASM to AS5850... It was not one of the more contentious areas of development, except for perhaps the modem technology.)

A thriving discussion! Awesome. Thanks for all the input so far guys. OK, here we go...

Graynomad:
ATtiny1634, 20 pins, 2 USARTs, 1k SRAM, 16k flash

Ah, thanks! I was on the fence about the 2313's RAM. It should be enough for what I want it to do, but if I need more, I know where to look. Not sure whether I want to chicken out and take advantage of two hardware serials per chip, or stick to the discipline of trying to synchronize multiple devices just for the sake of it. Something to think about...

You have about 100k Bytes/second you need to pump, which is only about 160 instructions per byte at 16MHz. ... Hm, my calculator says 64.8KB/s (assuming 8-N-1) if you go ahead and use traditional SS SPI (no packet prefixes).

115.2Kb/s / 8 * 4 = 57.6KB/s raw data, potentially -- again, in theory, if all ports are at 100%. In full-duplex, that would be 115.2KB/s. Plus any preamble or other overhead.

tylernt:
SirNickity, you mentioned a parallel bus to the Wiznet -- I'm not sure that's where the parallel bus should be. The ATTiny 2313 has pins PB0 through PB7 that (I presume) could be used as an 8-bit parallel bus to the Atmega. On the Atmega, PD0 though PD7 could be used (leaving hardware SPI port B free for the Wiznet).

More info on that: I bought a standalone WizNet module -- the WIZ811MJ -- which is a snap-in module kinda like the Zigbee stuff. Basically a life-support board for the raw IC. The WizNet W5100 IC itself can use either SPI or a parallel interface (controlled by a signal pin), and the parallel bus is indeed broken out on the module. It uses 8 data bits (naturally) and a 14-bit address bus, though, which means the big AVR would probably need to be something with more pins, like an ATmega1284. (Using shift registers here would defeat the point of going parallel.)

I haven't completely thought this through yet, but it should be possible to signal time-slices (using a write-enable and read-enable signal to the tiny) so the tiny writes its data byte to the bus while the mega writes the address bits for the WizNet, then tells the WizNet to read the data bus. This prevents having to transfer the data from tiny to mega, then from mega to WizNet. Plus, if the mega is going to echo the data to its local upstream serial port, it can grab the byte while it's already there on the bus. Make sense?

tylernt:
Further, with an additional clock pin, data could be sent in bursts of several bytes at a time from each ATTiny in turn to cut down on overhead. Don't use interrupts; have the Atmega poll each ATTiny round-robin and let them buffer between bursts.

Again, I haven't completely fleshed this out in my head, but I was thinking the same thing. If the tiny signals data-available via a HIGH pin, I can read that, set its write-enable pin HIGH, wait for the data to appear on the bus, then write the address bits and signal the WizNet to read the byte. I'd strobe the tiny (for the next byte) until the data-available pin goes LOW, or for a maximum of say 8 bytes to keep bus contention low. Then drop write-enable, set read-enable if there's data to be sent TO the tiny, follow pretty much the same procedure in reverse, and move on to the next tiny. The WizNet supports auto-increment on the address bus, so with strobing, multiple bytes can be sent pretty fast.
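
In rough code form, here's how I picture one pass of that loop (tiny-to-WizNet direction only; the read-enable path back to the tiny is omitted). All the helper names are placeholders for whatever the real pin assignments and W5100 bus cycles end up being, so treat it as a sketch of the sequencing rather than working code:

```c
#include <stdint.h>

/* Placeholder glue -- to be mapped onto real port pins and W5100 bus cycles */
static uint8_t tiny_has_data(uint8_t t)                 { (void)t; return 0; }
static void    tiny_write_enable(uint8_t t, uint8_t on) { (void)t; (void)on; }
static void    tiny_strobe(uint8_t t)                   { (void)t; }
static uint8_t data_bus_read(void)                      { return 0; }
static void    wiznet_set_address(uint16_t a)           { (void)a; }
static void    wiznet_write_strobe(void)                { }   /* W5100 latches the bus */
static void    console_tx(uint8_t b)                    { (void)b; }

#define N_TINIES  4
#define BURST_MAX 8     /* cap per visit so no one tiny hogs the bus */

/* One round-robin pass: move up to BURST_MAX waiting bytes per tiny straight
   from the shared data bus into the WizNet, echoing the monitored port locally. */
void service_tinies(uint16_t tx_addr[N_TINIES], uint8_t monitored_port)
{
    for (uint8_t t = 0; t < N_TINIES; t++) {
        if (!tiny_has_data(t))
            continue;

        tiny_write_enable(t, 1);                     /* tiny drives the data bus */
        for (uint8_t n = 0; n < BURST_MAX && tiny_has_data(t); n++) {
            uint8_t b = data_bus_read();             /* mega can peek the same byte */
            wiznet_set_address(tx_addr[t]++);        /* where this port's TX data goes */
            wiznet_write_strobe();                   /* WizNet reads the byte off the bus */
            if (t == monitored_port)
                console_tx(b);                       /* echo to the upstream console */
            tiny_strobe(t);                          /* ask the tiny for the next byte */
        }
        tiny_write_enable(t, 0);                     /* release the bus */
    }
}
```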

The tiny can just poll constantly since it won't have much else to do. I wouldn't even have to use serial interrupts. It could just poll the signal pins, poll the UART, goto 10.
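
And on the tiny side, the whole firmware really could boil down to a loop like this. The handshake helpers are placeholders again, and the FIFOs have no overflow handling -- see the buffering discussion above:

```c
#include <avr/io.h>
#include <stdint.h>

/* Placeholder handshake helpers -- to be mapped onto real port bits */
static uint8_t write_enable_high(void)        { return 0; }  /* mega wants us to drive the bus */
static uint8_t read_enable_high(void)         { return 0; }  /* mega is driving a byte for us  */
static void    drive_data_bus(uint8_t b)      { (void)b; }
static uint8_t sample_data_bus(void)          { return 0; }
static void    set_data_available(uint8_t on) { (void)on; }

/* Bare-bones FIFOs; sizes chosen to fit in the 2313's 128 bytes of SRAM */
static uint8_t rxq[64], txq[32];
static uint8_t rxh, rxt, txh, txt;

int main(void)
{
    /* UART init for the chosen baud rate omitted */
    for (;;) {                                     /* the "goto 10" loop */
        if (UCSRA & _BV(RXC))                      /* byte in from the router */
            rxq[rxh++ & 63] = UDR;

        set_data_available(rxh != rxt);            /* tell the mega we have data */
        if (rxh != rxt && write_enable_high())     /* mega is collecting a byte */
            drive_data_bus(rxq[rxt++ & 63]);

        if (read_enable_high())                    /* mega has a byte for us */
            txq[txh++ & 31] = sample_data_bus();

        if (txh != txt && (UCSRA & _BV(UDRE)))     /* byte out to the router */
            UDR = txq[txt++ & 31];
    }
}
```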

I would have to write this as a short program and see what the instruction count looks like, but it seems reasonable. You guys seem to have the cycle counts right at the tip of your tongue. (Respect.) 8) Remember, clock speed can be 14.7456 or 18.432MHz for serial compatibility. On a side note, neither of these seems to support an even divisor below 9600. Not sure that's a big problem in reality. I guess I could set a system clock divider on the tiny for low-speed comms.

tylernt:
Say... if the ATTiny/Atmega clocks are all synchronized, does that mean bits/bytes can be transferred in a single clock cycle instead of two?

Exactly. XD One of the reasons this practical project dovetails nicely with the synchronized clock exercise.

mrburnette:
Just a warning... As you are working in a commercial company as an employee, you must have permission to install any custom designed hardware on their network. In some companies, you may need AVP or VP level written approval.

That's a good point, and thanks for bringing it up. My intended use is more for field, lab, or initial configuration work rather than a fixed install. I.e., this isn't going in an equipment rack. We have actual terminal servers and routers for that. At my desk, though, I have a stack of routers and switches. The routers stay there for labs and configuration proving. On the switches, I'm upgrading the firmware and clearing the configs so they can be put back in stock. Also checking one for possible RMA. That's the kind of stuff I want to use this device for. The Ethernet end will probably go directly into my laptop, although I do have a jack with an appearance of the "public" Internet network. On one hand, there's not much liability for that network -- it's the Internet. OTOH, my device isn't meant to be secure, so I would have to be exceedingly careful about having it exposed to the wild if there's anything behind it that I wouldn't want similarly exposed.