
Topic: Beta version of GLCD library version 3 (Read 48605 times) previous topic - next topic


Aug 06, 2010, 03:30 am Last Edit: Aug 06, 2010, 06:07 am by bperrybap Reason: 1
Next I'm going to study the glcd library and hopefully find a way to get this T6963 display working with the glcd lib.
So far I think at least the functions readData, writeData, writeCommand, WaitForStatus, goTo, and init need to be adapted for this controller.

Those are the primary functions that are needed to make all the higher level class functions operate.
But technically it is only readData(), writeData(), Init(), and gotoXY() as the other functions are support functions used by those 4 functions.

For those following along:

While providing the same, yet expanded, functionality,
the glcd library implementation is very different under the hood compared to the ks0108 library.

The ks0108 library was a single class with all the code
lumped into the ks0108 class.

The glcd library is broken into 3 classes
- glcd (in glcd.cpp)
- gText (in gText.cpp)
- glcd_Device (in glcd_Device.cpp)

The glcd class provides graphic functions as well as access functions into both gText and glcd_Device.
gText is obviously for the text areas but calls into glcd_Device.
glcd_Device handles the low level hardware i/o.

Both glcd and gText currently make some assumptions about the underlying display capabilities and orientation.
The biggest ones being:

- glcd memory is readable as well as writable.
- glcd memory to display pixel mapping consists of 8 pixels per byte
 (glcd page) and pixels in that page are 8 vertical pixels on the display.
- x & y coordinate values will not be larger than 255
- 0,0 origin is upper left corner of the display.
- Font data is stored in the same bit/pixel order as display memory.
- a read from memory does not advance the memory address, while
 a write to memory does advance it.
  This allows fast read/modify/write operations: the upper
  layers simply read a byte, modify it, then do a write, and can then
  do the next read without having to worry about resetting the memory address.
  It also allows fast write updates for things like bitmaps and font data,
  as the code can do back-to-back writes and the address increment
  is handled by the glcd_Device layer or the hardware itself.
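
For those who like code, the page layout and read/modify/write behavior above can be modeled with a small, purely illustrative C++ sketch (a plain in-memory framebuffer; none of this is the library's actual code):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical, simplified model of the memory layout described above
// (not the actual library code): 8 vertical pixels per byte ("page"),
// origin 0,0 at the upper-left corner of a 128x64 display.
const int WIDTH = 128, HEIGHT = 64;
static uint8_t framebuffer[WIDTH * HEIGHT / 8];

// Set or clear one pixel with a read/modify/write, the way the upper
// layers operate against the device layer.
void setDot(uint8_t x, uint8_t y, bool on) {
    int index = (y / 8) * WIDTH + x;   // which page byte holds this pixel
    uint8_t bit = 1 << (y % 8);        // which vertical bit inside that byte
    if (on)  framebuffer[index] |= bit;    // read, modify,
    else     framebuffer[index] &= ~bit;   // write back
}

bool getDot(uint8_t x, uint8_t y) {
    return framebuffer[(y / 8) * WIDTH + x] & (1 << (y % 8));
}
```

Two pixels at the same x but adjacent y (12 and 13) land in the same page byte, which is exactly why the read/modify/write cycle matters: writing one pixel must not clobber its neighbors in that byte.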

All the functionality currently provided by the glcd library (graphics & text) lives on top of these capabilities/assumptions.
Displays that cannot support these capabilities/assumptions
either directly or with library code will be much more difficult to get working with the glcd library.

Displays that can provide those capabilities and operate within those assumptions shouldn't be that difficult to get working with the glcd library.

Conceptually, glcd_Device's job is pretty simple.
Read & Write glcd memory and provide the ability to alter
the address where memory is read/written based on x & y coordinates.
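
That contract might be sketched as a hypothetical skeleton like this (class and member names are illustrative; the real glcd_Device code is considerably more involved and talks to actual hardware):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical skeleton of the device-layer contract described above.
// The real glcd_Device class is more involved; this just shows the
// core operations a new controller port must supply, against a
// simulated RAM instead of a real chip.
class DeviceSketch {
public:
    void Init() { /* put the controller into graphics mode, clear RAM */ }

    // Translate x,y into the controller's memory address so the next
    // ReadData()/WriteData() hits the byte containing that pixel.
    void GotoXY(uint8_t x, uint8_t y) { addr_ = (y / 8) * width_ + x; }

    // Read does NOT advance the address (enables read/modify/write)...
    uint8_t ReadData() { return ram_[addr_]; }

    // ...while write DOES advance it (enables fast back-to-back
    // streaming of font and bitmap bytes).
    void WriteData(uint8_t data) { ram_[addr_++] = data; }

private:
    static const int width_ = 128;
    uint16_t addr_ = 0;
    uint8_t ram_[128 * 64 / 8] = {};
};
```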

The glcd_Device code may be a bit painful to get up to speed on.
It uses many macros and conditional ifdefs. Many of these were to make the
code more portable, more easily adapted to other displays, or to increase
the digital i/o performance by avoiding the Arduino digital i/o library functions
(digitalWrite(), digitalRead(), pinMode() etc...) and talking directly to the AVR ports/pins.

While it would be possible to add in a new device by replacing
the readData(), writeData(), GotoXY(), and Init(), it would be
better to take a look at the structure that is already in place for supporting other displays to see if that can be leveraged.
There is a collection of macros and defines used by the glcd_Device code that are defined in the device directory and in glcd_io.h.
While a structure is in place for alternate displays, sometimes a new display will have a need that is unaccounted for in the glcd_Device code.
Should that occur, it can usually be handled with new or additional code that is conditionally compiled in based on macros or defines in a device-specific header.
For example, the Init() code already looks for a macro called
glcd_DeviceInit(), which can override the default init code for
a given chip.
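
The override hook is plain conditional compilation. A purely illustrative sketch of the pattern (the macro name comes from the description above; t6963_init, default_init, and doInit are made-up stand-ins, not the library's source):

```cpp
#include <cassert>

// Illustrative pattern only -- not the library's actual source code.
// A device-specific header can define glcd_DeviceInit to replace the
// default init sequence for that chip.

// --- what a hypothetical device header (say, a T6963 port) might provide ---
static int t6963_init() { return 6963; }   // chip-specific init, marker value
#define glcd_DeviceInit() t6963_init()

// --- the shared device-layer code ---
static int default_init() { return 0; }    // common init path

int doInit() {
#ifdef glcd_DeviceInit
    return glcd_DeviceInit();   // the device header supplied an override
#else
    return default_init();      // fall back to the common init code
#endif
}
```

Because the check happens in the preprocessor, a device that is happy with the default init simply doesn't define the macro and pays no cost.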

The reason it is best to look at using the device macros is that this will allow the library to support a new device while still supporting all the devices already handled through the existing configuration mechanism.

While only a general overview, hopefully it provides some insight as to the inner workings of the glcd library.

--- bill


Thanks for the extensive write up of the glcd library inner workings.
I guess I'll be reading for the coming few days hoping to see the light somewhere at the end ;)

From what I've seen so far, the T6963 can do what the KS0108 can do and has some additional features like cursor style setting and display mode setting (OR, AND, XOR).
The hardware interface is slightly different (no CSEL pins, separate pins for Read and Write, and a Font Select pin).
For now I will hardwire Font Select for the 8x8 font, and for splitting the RW control into separate R and W lines I'm thinking of using a simple transistor inverter.


The easiest/quickest way to get the T6963 integrated into the glcd
library would be to ignore the text capabilities for now.
The glcd library doesn't have any code to take advantage of this
capability, as it assumes the display is a simple bitmapped memory
display and handles all the character rendering internally.

Michael and I have both taken a look at a few displays that
have hardware text support, and while it would be possible
to add support for it,
adding in support for hardware text capability won't be a simple drop in.
This is because the gText class currently assumes fonts are rendered
from local font data and that it has total control over every
pixel in the bitmapped display. This control allows gText to do some things that likely are not possible with hardware text.

For example the ability to render characters on any pixel boundary.
The displays that Michael and I have looked at so far tend to force the character glyph to land on certain boundaries.

Also, gText allows configuring multiple text areas,
each allowed to select its own font, and
supports line wrapping and scrolling within those user-defined
text areas.

Overall, the issue really isn't so much putting in code to support
hardware text capabilities but how to update the API to support it
as some of the text capabilities will be different when using hardware
text support.

--- bill


Ok, Bill. Thanks for the advice.
In this case less is better (for me that is). I'm still in the very early stages of trying to understand what you guys have created.
I always find it difficult to get into someone else's mind to follow their way of thinking. An extensive library like this doesn't make that easier.



Using a Mega and a no-name GLCD I am hitting 18.06 fps with no problems. I am going to use it in my graphing sketch to see how it does; I will get back with an update.



Thank you for this library update.

Fixed all my problems with the Mega and GLCD. I was having problems drawing on the glcd in a single screen "sweep". It always showed missing pixels, and after the sketch ran for a while without any input, it started to add random dots on the screen. I had already checked my wiring, but that didn't fix it.

Now, I just changed the library to this one, and my sketch worked almost out of the box (some problems with drawvertline and drawhorizline).
Now I draw the screen correctly in one single pass, with no more annoying dots showing up after a while.

Thank you, great work!

By the way, you have said that colour GLCDs will be supported in the future.

I've bought this one:

It uses SSD2119 controller.

Do you think that this screen can be included in the GLCD library?

Right now I'm building my project on the 128x64, but in the future I want to move into this higher resolution screen.


Aug 20, 2010, 03:49 am Last Edit: Aug 20, 2010, 03:56 am by E.U.A.. Reason: 1
I wanted to ask if it's possible to update the library to leave the i2c pins alone on Arduinos. Analog pin 4 is used for glcdEN, blocking i2c communication. Wouldn't it be better to have the i2c pins free when using the GLCD library, since pin 12 is available for the job?

I think the I2C port would still be usable if we share it with the glcd, if only we pull SCL (analog pin 5) low while we're not using the I2C port.
The i2c bus requires SCL to be high (the idle state of the i2c bus) for START and STOP signals. But this might be complex. Instead, we could add a definition so that if the Wire library is used, we drive SCL low; this would automate the sharing of the pin. This also matters for slave devices, because they are allowed to drive SCL low (clock stretching) to do some jobs internally. So a pin-4 sharing mechanism, or moving analog pin 4 to digital pin 12, would be a good move I think.

A weird thing is happening with the GLCDDemo code. When I compile it for a 16Mhz CPU, it runs at 15.08 fps, but when I change the XTAL to 20Mhz the speed goes down! How could that be?
Also, if you compile the program for a 20Mhz CPU, it runs faster on the same 16Mhz XTAL. It doesn't seem logical to me... Any ideas about it?
Speed table:

Code Compiled For | XTAL Speed | GLCDDemo FPS
16Mhz             | 16Mhz      | 15.08
16Mhz             | 20Mhz      | 13.46
20Mhz             | 16Mhz      | 18.56
20Mhz             | 20Mhz      | 16.87

Note, I also applied the 20Mhz delayMicroseconds function hack, so on the 20Mhz compilation, the delayMicroseconds function waits the same time as the 16Mhz variant (of course, when you use a 20Mhz XTAL).


Aug 20, 2010, 07:27 am Last Edit: Aug 20, 2010, 07:28 am by mem Reason: 1
You can configure the pins to whatever you want without changing the library. The pin assignment used in the configuration files in the download is for backward compatibility with the first version of the library, so users don't have to rewire a working system when upgrading software.

But if you want to reconfigure to free up the i2c pins you can make that change - note however that using pin 12 would conflict with SPI. There are no 'free' pins on a standard Arduino board.

The glcd library does not use delayMicroseconds. It uses the Arduino delay function for millisecond delays and a routine in delay.h for nanosecond delays. Both of these functions take the #define for CPU speed into account.

The smaller-than-expected increase in performance you are seeing when running at 20Mhz is due to the limitation of the GLCD performance. The library checks to see if the controller chip is ready before doing I/O to the GLCD controller chip.
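
The effect of that ready-check can be demonstrated with a toy simulation (entirely hypothetical: a fake controller that stays "busy" for a fixed number of polls stands in for reading the real chip's status register):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical model: the display takes a fixed amount of wall-clock
// time per write, so a faster CPU just burns more status polls waiting.
struct FakeController {
    int busyCount = 0;                       // polls left until ready
    bool busy() {
        if (busyCount > 0) { --busyCount; return true; }
        return false;
    }
    void write(uint8_t) { busyCount = 3; }   // each write costs 3 polls
};

// Poll-then-write, counting how many status polls were spent waiting.
int writeWithWait(FakeController& c, uint8_t data) {
    int polls = 0;
    while (c.busy()) ++polls;   // spin until the controller is ready
    c.write(data);
    return polls;
}
```

The point of the sketch: once the display is the bottleneck, extra CPU clock only converts into extra spins of that while loop, which is why FPS doesn't scale proportionally with the crystal.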


I think the SPI pins are already blocked, since pins 11 and 10 are required for SPI. Anyway, I think you missed the point about the weird fps... What I wanted to say is that there is no increase from using the 20Mhz xtal.

With the 16Mhz Xtal I got higher FPS; with the 20Mhz Xtal I got lower fps, independent of which CPU speed the code was compiled for. Please look at the table again. It seems weird to me...


Aug 21, 2010, 05:23 am Last Edit: Aug 21, 2010, 05:24 am by mem Reason: 1
Anyway, I think you missed the point about weird fps... There is no increase with using 20Mhz xtal

With 16mhz Xtal, I got higher FPS, with 20Mhz Xtal, I got lower fps


Code Compiled For | XTAL Speed | GLCDDemo FPS
16Mhz             | 16Mhz      | 15.08
20Mhz             | 20Mhz      | 16.87

It looks to me like you are getting a 10% increase in speed when using the 20Mhz crystal. The reason why the increase is not proportional was given in my previous post.


I assume you meant the middle 2 entries. (#2 and #3 in this list)

Code Compiled For | XTAL Speed | GLCDDemo FPS
1) 16Mhz          | 16Mhz      | 15.08
2) 16Mhz          | 20Mhz      | 13.46
3) 20Mhz          | 16Mhz      | 18.56
4) 20Mhz          | 20Mhz      | 16.87

Those do look odd. But it is hard to tell exactly what you have done.

You appear to be intentionally compiling code with the F_CPU set incorrectly, which will generate incorrect delays.
Is that what you did? How exactly did you build the images?

That said, I would have expected the FPS results for #2 & #3 to be reversed, assuming there was no change in configuration
and the display is still functioning properly, showing
a proper-looking FPS display.

Are you sure the FPS values for #2 and #3 are not swapped?

In the case of #2, compiling with F_CPU set to 16Mhz but running the processor at 20Mhz
will cause delays to be shorter than #1, and all the rest of the
code should be a little bit faster as well. Even if the h/w polling
erases some of the shortened delays, I would not expect #2 to be slower
than #1.

For #2, are you saying you used the *exact* code image from #1
on the same processor, using the *exact* same glcd (not the same model, but the very same one) with the *exact* same pin configuration (no recompile, no new download), and the only change between #1 and #2 is simply replacing the crystal with a 20Mhz one?

And for #3, same thing as above with respect to #4:
no recompile, no new download, no changes, simply swapping crystals?
If so, then I agree something odd is happening.

But I have seen some odd things in the past. For example,
sometimes if the timing is just right, running a bit slower can
actually yield faster results. This is because when the timing is
just right, the code will test a hardware flag and not have to
spin on it because it is already ready. In that case you save a
loop iteration and a few clocks. But normally it is very
rare to see this, and in the case of jumping from 16Mhz to 20Mhz
I would think that the rest of the code would make up for that.

Looking forward to getting more details on this.

--- bill


Clearly #1 and #2 are the same HW, same FW, same GLCD, same configuration, pins, etc... everything. I just removed the 16Mhz Xtal and replaced it with the 20Mhz one, and the FPS dropped.

The same applies to #3 and #4... but the FW is compiled for 20Mhz.
I am sure my F_CPU is correct for those configurations. But even if I had set it wrong, I used the same FW for configs #1 & #2 and for #3 & #4...


Aug 21, 2010, 02:32 pm Last Edit: Aug 21, 2010, 02:34 pm by robinet_pl Reason: 1

Like Bill said - maybe the answer is in the display's speed and the processor/LCD timing?
2) compiled for 16MHz but run at 20MHz - the "slower" version runs faster, and when the processor checks whether the LCD is ready, it isn't, so the processor has to wait. The more often it checks and misses, the more clocks it loses.
3) the faster version run on the slower clock - a better hit ratio ;)

Do you have another LCD?



Aug 21, 2010, 03:30 pm Last Edit: Aug 21, 2010, 03:31 pm by mem Reason: 1
It may have something to do with the fact that when compiled for 16Mhz and running with a 20Mhz crystal, the timer that drives millis is overflowing 25% faster than the code expects. That means there is less time to draw the frames, resulting in a lower FPS than if the clock doing the timing was accurate.

Anyway, my recommendation is to avoid setting it wrong ;)


Yes, I thought of that and came here to write it :)

If I compile the code for 16Mhz, the CPU counts to 16M for a second...
But if I use 20Mhz with this setup, it again counts to 16M, which takes less than a second, although the CPU thinks it's a second. So it's actually reporting frames per 80% of a second, which is lower than expected.

Code 16Mhz | Xtal 20Mhz | fps 13.46
With basic math:
13.46 * 20 / 16 = 16.825, which is really close to the 20Mhz code with the 20Mhz Xtal setup, 16.87 fps. :)
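
That correction can be wrapped up as a tiny helper (a hypothetical function, just restating the arithmetic above):

```cpp
#include <cassert>
#include <cmath>

// A build compiled for one clock but run at another mis-measures "one
// second" by the clock ratio, so the reported FPS can be corrected by
// multiplying by (actual clock / compiled-for clock).
double correctedFps(double reportedFps, double actualMHz, double compiledMHz) {
    return reportedFps * actualMHz / compiledMHz;
}
```

With the numbers from the table: correctedFps(13.46, 20, 16) gives 16.825, in line with the 16.87 fps measured when F_CPU matched the crystal.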
