
I need to build a system to store the following data:

- For each stock, I should keep data for 1000 fields ("open", "high", ...).
- Each field gets updated 390 times in a day (meaning, there are 390 intervals).
- Overall, I have a total of 10 years of data for each stock/field/interval.
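To put that volume in perspective, here is a back-of-envelope calculation for a single stock; the roughly 252 trading days per year and 8 bytes per value are assumptions for the estimate, not figures from the description above.

```python
# Back-of-envelope volume for ONE stock.  The 1000 fields, 390 intervals,
# and 10 years come from the description above; ~252 trading days per year
# and 8 bytes per value are assumptions.
fields = 1000
intervals_per_day = 390
trading_days_per_year = 252
years = 10

values = fields * intervals_per_day * trading_days_per_year * years
print(f"{values:,} values per stock")        # 982,800,000
print(f"~{values * 8 / 2**30:.1f} GiB raw")  # ~7.3 GiB before any index overhead
```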

Here are the requirements in terms of insertion/querying: such a query must be retrieved as fast as possible.

In terms of budget, since I do not have the means to buy a huge server and something like SQL Server to store the data, a friend recommended that I look into MySQL. I tried it, but queries are extremely slow if I don't add any indices to the table. On the other hand, if I do add indices, the insertions are tremendously slow, so this does not help either. My machine has only 2GB of memory in it, so either way, the indices will not fit in memory.

Is it true that flat binary files, on a by-field/by-interval basis, are the best solution given my requirements and budget? What is the best way to store such data in a scalable way (I might have even more fields as time goes by)? (If it makes any difference at all, I use Linux.)

MySQL is probably not what you want if you're dealing with data you need represented faithfully and with powerful indexing. I'd suggest PostgreSQL, which is also free and generally an all-around great project(TM).

Flat binary files (or even ASCII) should be a decent solution if you don't need to manipulate the data in place afterward or do complicated joins. If you're going to have to edit data in its current location in the flat file, you have an enormous chore. If you need to add fields later with a flat file, you have a bigger chore.
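If you go the flat-file route, the usual trick is to make every record a fixed size, so that a (day, interval) pair maps directly to a byte offset and point lookups need no index at all. Here is a minimal sketch of that layout in Python; the directory scheme, float64 records, and function names are assumptions for illustration, not a prescribed format.

```python
import os
import struct

INTERVALS_PER_DAY = 390
RECORD = struct.Struct("<d")  # one little-endian float64 per interval


def path_for(root, stock, field):
    # One file per stock/field, e.g. root/AAPL/open.bin (layout is an assumption).
    return os.path.join(root, stock, field + ".bin")


def append_day(root, stock, field, values):
    """Append one full trading day (390 values) to the field's file."""
    assert len(values) == INTERVALS_PER_DAY
    filename = path_for(root, stock, field)
    os.makedirs(os.path.dirname(filename), exist_ok=True)
    with open(filename, "ab") as f:
        f.write(b"".join(RECORD.pack(v) for v in values))


def read_value(root, stock, field, day_index, interval):
    """Fetch one value by seeking straight to its computed offset."""
    offset = (day_index * INTERVALS_PER_DAY + interval) * RECORD.size
    with open(path_for(root, stock, field), "rb") as f:
        f.seek(offset)
        (value,) = RECORD.unpack(f.read(RECORD.size))
    return value
```

Appending a new day is one sequential write and a point query is one seek; the price you pay is exactly the chores described above when fields have to be added or values edited in place.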

Indexing shouldn't pose too much of a challenge if you tune your system for the quantity of RAM you're dealing with. PostgreSQL handles indexing reasonably, and caches queries for performance.
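If you stay with PostgreSQL, here is a minimal sketch of the kind of narrow table and composite index this implies; the table name, column names, and sample row are illustrative assumptions rather than a prescribed schema.

```python
import psycopg2

# Connection string is a placeholder; adjust for your setup.
conn = psycopg2.connect("dbname=ticks user=postgres")
cur = conn.cursor()

# One narrow row per (stock, field, interval).  The composite primary key
# doubles as the index that point and range queries need.
cur.execute("""
    CREATE TABLE IF NOT EXISTS tick (
        stock text        NOT NULL,
        field text        NOT NULL,
        ts    timestamptz NOT NULL,  -- one of the 390 daily intervals
        value double precision,
        PRIMARY KEY (stock, field, ts)
    )
""")

# Batch inserts in one transaction (or use COPY) so index maintenance is
# amortized instead of paid per row.
rows = [("AAPL", "open", "2014-01-02 09:30:00+00", 79.38)]  # sample data
cur.executemany(
    "INSERT INTO tick (stock, field, ts, value) VALUES (%s, %s, %s, %s)",
    rows,
)
conn.commit()

# A typical range query: one field of one stock over a time window.
cur.execute(
    "SELECT ts, value FROM tick"
    " WHERE stock = %s AND field = %s AND ts BETWEEN %s AND %s"
    " ORDER BY ts",
    ("AAPL", "open", "2014-01-02", "2014-01-03"),
)
print(cur.fetchall())

cur.close()
conn.close()
```

The RAM tuning mentioned above largely comes down to settings such as shared_buffers and work_mem in postgresql.conf, sized to the memory you actually have.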
