[thelist] a question on MYSQL LIMIT clause - large tables.

VOLKAN ÖZÇELİK volkan.ozcelik at gmail.com
Wed Feb 15 08:24:19 CST 2006


Hi list,

I am more of a MS-SQL guy and I have not worked with paginating large
tables in MySQL (until now).

My question is which approach is better in terms of response time:

SELECT columns FROM tableName WHERE id > 9000 ORDER BY id LIMIT 30
(--assuming an auto-increment id here ... I want to fetch 30 rows, and
missing one or two records is not that important. 9000 is the last id
of the previous page; I don't care whether there are 8956 rows
or 9013 rows before that row--)

OR

SELECT columns FROM tableName ORDER BY id LIMIT 30 OFFSET 9000

My logic says that using LIMIT together with a condition on the primary
(auto-increment) key should respond faster (the first query), while the
second query will respond somewhat slower, and that slowness may
increase as the table grows.
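To make the comparison concrete, here is a small sketch using Python's
sqlite3 module as a stand-in for MySQL (table and column names are made
up for the example). It shows the two styles returning the same page
when ids are contiguous; the difference is that the OFFSET form makes
the server walk and discard the first 9000 rows, while the keyset form
can seek straight to id 9000 in the index:

```python
import sqlite3

# Hypothetical in-memory table standing in for the real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableName (id INTEGER PRIMARY KEY, col TEXT)")
conn.executemany("INSERT INTO tableName (col) VALUES (?)",
                 [("row %d" % i,) for i in range(1, 10001)])

# Keyset ("seek") pagination: resume from the last id of the previous page.
keyset = conn.execute(
    "SELECT id, col FROM tableName WHERE id > ? ORDER BY id LIMIT 30",
    (9000,)).fetchall()

# Offset pagination: the engine must scan past the first 9000 rows.
offset = conn.execute(
    "SELECT id, col FROM tableName ORDER BY id LIMIT 30 OFFSET 9000"
    ).fetchall()

print(keyset[0][0])   # first id on the page
print(len(keyset))    # page size
```

With gap-free ids both queries return the identical page; once rows are
missing (deleted ids), the keyset page can start at a different row than
the offset page, which is the "missing one or two records" trade-off
mentioned above.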

However, I'm not a DB expert, so I wanted to ask the opinions of the
gurus out there.

- Which is the more efficient and effective way?
- Are there any other approaches that might prove faster (I don't know,
creating a temporary table, a summary table of some sort,
anything)?

I googled around but was not able to find a satisfactory explanation
of the behind-the-scenes workings of LIMIT. I assume it is DAO- or
driver-specific. Is it okay to trust it fully, or would I be better off
offsetting my results on a primary key I trust?

Other details:

My table is MyISAM; there are frequent inserts and selects on it, but
not many updates or deletes (which is why I chose MyISAM for the
table).

Currently the load on the database is not that heavy. However, the
table will probably grow a lot, and the query load on it will increase
considerably.


TIA,
--
Volkan Ozcelik
+>Yep! I'm blogging! : http://www.volkanozcelik.com/volkanozcelik/blog/
+> My projects/studies/trials/errors : http://www.sarmal.com/
