I can't see how it can take 9 seconds for a simple select on an indexed column with only 600 000 rows in the db?!?!?
Oh, sorry, that query returns all rows right?
A must on any query against a serious amount of rows is correct indexing and well thought-out "where" clauses. The good old SELECT * FROM x WHERE created > '2006-01-01' AND created < '2006-01-31' will kill the db very fast at those volumes, especially if created happens to be anything other than an indexed unix timestamp...
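For example, something like this would at least give the optimizer a chance (just a sketch; the table/column names x and created come from the example above, the index name is made up, and I'm assuming created holds unix timestamps):

  -- index the column you filter on (idx_created is just an example name)
  ALTER TABLE x ADD INDEX idx_created (created);

  -- range scan on the indexed column instead of a full table scan
  SELECT * FROM x
  WHERE created >= UNIX_TIMESTAMP('2006-01-01')
    AND created <  UNIX_TIMESTAMP('2006-02-01');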
The two largest tables in our systems are roughly 6 million rows in one, and somewhere around 250-300 million rows per 3-month period in the other.
I've posted this link before: http://www.mysql.com/why-mysql/case-studies/mysql-hp-ongame-casestudy.pdf It's a case study of the db system at the place where I work.
It is indeed possible to query these amounts, but it takes a lot of serious hardware and very good knowledge of how MySQL interprets and executes the query and how the indexing engine works.
To get to those volumes you are going to have a LOT of write operations hitting the db...
If you are contemplating storing the amounts you are talking about on a shared server... forget it.
If you have a dedicated server with loads of very fast disks, perhaps; that is, if you can get the access needed to tune it for massive data volumes.
If you have a dedicated server just for MySQL and a SAN for the data, yup...
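Tuning in this case mostly means giving MySQL enough memory to keep the indexes cached. Something along these lines in my.cnf (the variables are standard MySQL settings, but the values here are pure examples and depend completely on your hardware and storage engine):

  [mysqld]
  key_buffer_size         = 512M    # MyISAM index cache
  innodb_buffer_pool_size = 2048M   # InnoDB data + index cache
  table_cache             = 512
  tmp_table_size          = 256M
  max_heap_table_size     = 256M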
The best way to set it up is actually two servers: one for writes (Create, Update and Delete), with replication to a slave where you query the data. This eliminates a lot of the problems you get when tables are locked for writing while you try to read them. Also, with that amount of data you are certainly talking about a lot of statistical queries that will be murder on performance, and those you simply can't run on a live server that is constantly updated.
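Roughly, the replication part looks like this (a sketch only; the hostname, user, password and log coordinates are placeholders, the real coordinates come from SHOW MASTER STATUS on the master):

  # master my.cnf
  [mysqld]
  server-id = 1
  log-bin   = mysql-bin

  # slave my.cnf
  [mysqld]
  server-id = 2

  -- then on the slave:
  CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=98;
  START SLAVE;

Your application then sends all writes to the master and all heavy selects and reports to the slave.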
Next is the setup of indexes in the tables... for that you should consult a real DBA; it's an art when you are talking about these volumes.
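One sanity check you can do yourself before calling in the DBA is to run EXPLAIN on your heaviest queries and look at the key and rows columns (same hypothetical table as above):

  EXPLAIN SELECT * FROM x
  WHERE created >= UNIX_TIMESTAMP('2006-01-01')
    AND created <  UNIX_TIMESTAMP('2006-02-01');
  -- if key comes back NULL and rows is close to the full table size,
  -- you are doing a table scan and the query will die at these volumes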
To get away with this I'd probably set it up very carefully, then measure how much data I'm getting per 24h, and based on that decide how many days of data I need instant access to. Then I'd set up a periodical dump of that data to a storage solution that isn't performance sensitive; it could be flat files or a separate db that you can query without concerns. But be prepared to run out of disk a lot sooner than you think.
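The periodic dump itself can be as simple as this (a sketch; the db name, table name and the 30-day window are just examples, you'd run the dump against the slave and the delete against the master, off-peak):

  # dump everything older than 30 days to a file...
  mysqldump --where="created < UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 30 DAY))" mydb x > x_archive.sql

  -- ...then purge it from the live table (in batches if the table is huge)
  DELETE FROM x WHERE created < UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 30 DAY));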

/Dan
-{" Computer:
?As a Quantum Supercomputer I take advantage of Zeno?s paradox?
C: ?Imagine a photon that must travel from A to B. The photon travels half the distance to B. From its new location it travels half the new distance. And again, it travels half that new distance.?
C: ?It continually works to get to B but it never arrives.?
Human: ?So you keep getting closer to finishing your task but never actually do??
C: ?Hey, coders make a living doing that??
?}-