Partial enterprise database support

Mysql, Microsoft SQL, Sybase, Sqlite3, NoSQL-MongoDB, PostgreSQL, DB in Memory (memcached).

http://www.ibuyopenwrt.com/index.php/datalog-db

To do list: Oracle, IBM DB2

sonnyyu:
Profile C and Sqlite3

C speed is 3.59 ms

Profile Lua and Sqlite3

Lua speed is 4.69 ms

Profile Php and Sqlite3

Php speed is 4.35 ms

Profile Python and Sqlite3

Python speed is 4.43 ms

The best result, C at 3.59 ms per request, works out to only about 280 requests per second. It is slow.
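The profiling code behind these numbers isn't shown in the thread; as a rough illustration, here is a minimal Python sketch of the kind of timing loop involved (the table layout, row count, and in-memory database are all assumptions):

```python
import sqlite3
import time

# In-memory database; a file on an SD card would be slower still.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE datalog (ts REAL, value REAL)")

N = 1000
start = time.time()
for i in range(N):
    conn.execute("INSERT INTO datalog (ts, value) VALUES (?, ?)",
                 (time.time(), i * 0.5))
    conn.commit()  # one transaction per insert, as a naive logger would do
elapsed_ms = (time.time() - start) * 1000.0 / N

print("%.2f ms per insert" % elapsed_ms)
```

Committing after every insert is the worst case; wrapping many inserts in a single transaction is the classic SQLite speedup and would change numbers like these dramatically.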

NoSQL DB (Redis) to the rescue

NoSQL is usually associated with big data, distributed databases, scalability, and flexibility.

But one thing often overlooked is speed: some NoSQL servers offer lightning speed.

root@ubuntu:/etc/redis# redis-benchmark -q -n 1000 -c 10 -P 5
...
SET: 333333.34 requests per second
...

333,333.34 requests per second

Use Redis at Yun/Yun Shield

opkg update
opkg install python-openssl #adds ssl support to python
opkg install distribute #it contains the easy_install command line tool (this can take some time)
easy_install redis
nano testredis.py
#!/usr/bin/python
# -*- coding: utf-8 -*-
import redis
r = redis.StrictRedis(host='192.168.0.210', port=6379, db=0)
r.set('foo', 'bar')
value = r.get('foo')
print(value)

"192.168.0.210" is the Redis server box's IP address.

chmod 755 testredis.py
root@Arduino:~# ./testredis.py
bar

Am I correct in thinking that the purpose of this Thread is to recommend the use of REDIS as a faster alternative to SQLITE3?

It is not obvious from your title, nor from your two Posts - without reading the second one carefully.

If so I will read a bit more about Redis - it may be interesting. Is it as easy to use as SQLite3?

...R

NoSQL - MongoDB

Robin2:
Am I correct in thinking that the purpose of this Thread is to recommend the use of REDIS as a faster alternative to SQLITE3?

It is not obvious from your title, nor from your two Posts - without reading the second one carefully.

If so I will read a bit more about Redis - it may be interesting. Is it as easy to use as SQLite3?

...R

RDBMS:

A relational database management system (RDBMS) is a program that lets you create, update, and administer a relational database. Most commercial RDBMS's use the Structured Query Language (SQL).

Mysql, Microsoft SQL, Sybase, Sqlite3, and PostgreSQL are RDBMSs.

Sqlite3 is one of the fastest among them.

NoSQL:

Redis, MongoDB,...

Sorry, @Sonnyyu, but I can't figure what message you are trying to convey here.

I thought you were trying to identify databases that are faster than SQLite3, but you have not commented on that question.

...R

Robin2:
I thought you were trying to identify databases that are faster than SQLite3, but you have not commented on that question.

sonnyyu said this:

sonnyyu:
Profile C and Sqlite3

C speed is 3.59 ms

Profile Lua and Sqlite3

Lua speed is 4.69 ms

Profile Php and Sqlite3

Php speed is 4.35 ms

Profile Python and Sqlite3

Python speed is 4.43 ms

But he does not say on which computer this was run. Perhaps it was on a Yun?

Then he says this:

sonnyyu:

root@ubuntu:/etc/redis# redis-benchmark -q -n 1000 -c 10 -P 5

...
SET: 333333.34 requests per second
...

**333,333.34 requests per second**

But this is clearly not being run on a Yun (note ubuntu in the prompt.) So while the speed is impressive, no direct comparison can be made between the two results unless they are run under the same conditions: the same code doing the same types of updates, on the same computer, using the same database storage medium. Results can be seriously skewed by running one on a slow computer, doing large updates with poorly organized code, writing to a slow SD card; and by running the other on a fast computer, doing very small updates using a highly optimized benchmark utility, writing to a fast RAM database in /tmp.

Given the large difference between the two results, the odds are that redis will be faster, but we cannot guess how much faster without realistic comparisons.

Given that the DBMS he mentions doesn't use SQL, I'll bet that a large part of the speed increase is due to eliminating the need to parse the SQL request at run time. If you redesign your code to not use SQL and essentially parse the request before writing the explicit update code, of course it will be faster. The downside is the need to use an API specific to the DBMS, which means you could likely have compatibility issues trying to move to another DBMS, or extra work if your system already makes use of SQL. While SQL may incur some runtime overhead, the benefit is compatibility between a large range of platforms and database back ends.
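ShapeShifter's point about SQL parsing overhead can be illustrated with a small like-for-like sketch: the same lookups done through SQLite's SQL layer and through a plain Python dict standing in for a key-value store (the dict is only a stand-in for illustration, not Redis itself):

```python
import sqlite3
import time

# A tiny key/value table, and the same data in a plain dict
# (the dict stands in for a key-value store's direct access path).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
data = {"key%d" % i: "value%d" % i for i in range(100)}
conn.executemany("INSERT INTO kv VALUES (?, ?)", data.items())
conn.commit()

N = 10000

# Path 1: every lookup goes through the SQL layer.
start = time.time()
for i in range(N):
    row = conn.execute("SELECT v FROM kv WHERE k = ?",
                       ("key%d" % (i % 100),)).fetchone()
sql_s = time.time() - start

# Path 2: direct key lookup, no query to interpret.
start = time.time()
for i in range(N):
    row = data["key%d" % (i % 100)]
kv_s = time.time() - start

print("SQL: %.3fs  key-value: %.3fs" % (sql_s, kv_s))
```

On a desktop machine the dict side is typically orders of magnitude faster, but as noted above, only a like-for-like test on the same hardware (e.g. the Yun itself, against a real Redis server) would settle the comparison.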

Having had a very quick look at how Redis is used I agree with @ShapeShifter that a carefully organized like-for-like test would be essential before conclusions can be reached.

There is also the question of the tradeoff between programmer time (SQL might be more productive) and CPU time - a second CPU is not expensive.

However it is interesting to have the alternative brought to our notice.

...R

Robin2:
There is also the question of the tradeoff between programmer time (SQL might be more productive) and CPU time - a second CPU is not expensive.

That can be a complex trade-off to analyze. For a hobbyist, it's much simpler, since programmer time is generally free, and the number of units built is generally low (most often it's 1.)

But looking at it from a commercial engineering term, the programmer time is a non-recurring expense (NRE) while the additional processor is a recurring expense on each unit built. Engineering time is usually quite expensive, but being NRE, it is only paid once. However, the cost per unit can quickly add up in volume production: if only a few units are built, the cost per unit doesn't have much impact on the total profit, and the NRE becomes the major expense. But for a high volume product, even a few cents per unit can make a big difference in the profit margin, and the cost per unit can become the major expense, sometimes dwarfing the engineering expenses. For example, adding an additional processor at a cost of $2 per unit, and making 100,000 units, that works out to $200,000 extra cost, which comes right off the bottom line as lost profit. You can pay for a lot of engineering time with that money!

It probably doesn't make sense to try to translate an existing system from SQL to NoSQL, as the changes are likely to be extensive. But when starting a new project, it might make very good sense to consider it, especially when speed is a concern. Many times, though, processing speed is not the major issue, so it makes sense to use whatever tool results in the shorter development time. For example, in a lower data rate system, does it make sense to spend a lot of effort speeding up the database access so that an operation takes 2% of the processing time, and the processor spends 98% of the time waiting for the next operation? Or do you save engineering time, and end up spending 75% of the time updating the database and 25% of the time waiting for the next operation? After all, the time to update the database doesn't really matter until the data rate increases to the point where the update time becomes longer than the interval before the next update needs to occur.
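That last point can be made concrete with the SQLite figures from earlier in the thread: a per-write latency puts a hard ceiling on the sustainable logging rate.

```python
# Maximum sustainable update rate for a given per-write latency:
# once samples arrive faster than writes complete, the logger falls behind.
def max_rate_hz(write_latency_ms):
    return 1000.0 / write_latency_ms

# 3.59 ms (C) and 4.43 ms (Python) are the SQLite figures from the thread.
for latency in (3.59, 4.43):
    print("%.2f ms per write -> %.0f updates/s max" %
          (latency, max_rate_hz(latency)))
```

So at a few samples per second the SQLite latency is irrelevant; it only becomes the bottleneck as the data rate approaches a few hundred updates per second.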

Still, this looks like an interesting option when high speed is necessary. It's good to know that this is available to the Yun. It would be very interesting to have a realistic speed comparison between the two options.

ShapeShifter:
But looking at it from a commercial engineering term, the programmer time is a non-recurring expense (NRE) while the additional processor is a recurring expense on each unit built. Engineering time is usually quite expensive, but being NRE, it is only paid once. However, the cost per unit can quickly add up in volume production:

I suspect most cases where a very high performance database is required will be web servers and there will likely be several hardware systems anyway. Even if there were not, the total number required would be measured in 5s or 10s rather than 100,000s (unless you are Google, perhaps).

...R

Robin2:
I suspect most cases where a very high performance database is required will be web servers

I suppose you are right. I was considering small consumer level products (a typical use of embedded systems) but they are not as likely to need a high performance DB.

ShapeShifter:
...
But this is clearly not being run on a Yun (note ubuntu in the prompt.) So while the speed is impressive, no direct comparison can be made between the two results unless they are run under the same conditions:
...

Apples-to-apples comparison:

If you need speed just to fetch data for a given key or combination of keys, Redis is a solution you need to look at. MySQL can in no way compare to Redis and Memcache.

Upgrade datastore to memcached at Yun

Memcached is a general-purpose distributed memory caching system. It can deliver a 10~20x speed boost over Mysql.
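In a typical setup memcached sits in front of the SQL database as a cache; a minimal cache-aside sketch, with a plain Python dict standing in for the memcached server and sqlite3 standing in for Mysql (both substitutions are just to keep the example self-contained and runnable):

```python
import sqlite3

# Plain dict standing in for memcached; sqlite3 standing in for MySQL.
cache = {}
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sensors (name TEXT PRIMARY KEY, value REAL)")
db.execute("INSERT INTO sensors VALUES ('temp', 21.5)")
db.commit()

def get_sensor(name):
    """Cache-aside read: try the cache first, fall back to the database."""
    if name in cache:
        return cache[name]                # cache hit: no SQL at all
    row = db.execute("SELECT value FROM sensors WHERE name = ?",
                     (name,)).fetchone()
    if row is not None:
        cache[name] = row[0]              # populate the cache on a miss
    return row[0] if row else None

print(get_sensor("temp"))   # first call: misses, hits the database
print(get_sensor("temp"))   # second call: served from memory
```

The speed boost comes from repeated reads being served from memory without touching the SQL layer; the price is keeping the cache consistent when the underlying data changes.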

@sonnyyu, I have looked briefly at the link you posted in Reply #11 but I can't see where the code that was used is available for download.

...R

SIDE NOTE:

They just started offering a free service. My guess is they want to get into the IoT (Internet of Things) business.

Jesse

Robin2:
@sonnyyu, I have looked briefly at the link you posted in Reply #11 but I can't see where the code that was used is available for download.

...R

From article:

I’ve attached the source code used to test, if anybody has any doubts, questions feel free to ask

But I can't see the link!?

Redis: 333,333.34 requests per second

It does not stand still at that speed, since most NoSQL servers support horizontal scale-out very well.

Put 10 nodes in a Redis Cluster to get 3,333,333.4 requests per second.
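Redis Cluster spreads keys across nodes by hashing each key to one of 16384 slots (CRC16 mod 16384) and dividing the slots among the nodes; a simplified sketch of the same idea, using zlib.crc32 as a stand-in hash and assumed node/key names:

```python
import zlib

NODES = ["node%d" % i for i in range(10)]   # a 10-node cluster, per the post

def node_for(key):
    """Deterministic key -> node mapping (Redis Cluster really uses
    CRC16 mod 16384 hash slots; crc32 is a stdlib stand-in)."""
    return NODES[zlib.crc32(key.encode()) % len(NODES)]

# Each key always lands on the same node, and different keys spread
# across nodes, so total throughput scales roughly with the node count.
for key in ("sensor:1", "sensor:2", "sensor:3"):
    print(key, "->", node_for(key))
```

Each node only handles its own share of the keys, which is why aggregate throughput grows roughly linearly as nodes are added.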

sonnyyu:
But I can't see the link!?

I noticed that too. It makes me doubt the results.

...R