Reply To: large databases

#3901
rpedde
Participant

The bottom line is that you're going to have to iterate over all the files in your library (twice) every time someone requests a song: once to build a map of path to id, and once to synthesize tag info from the paths and send all of that data as a stream of DMAP blocks back to iTunes, the Soundbridge, or whatever.
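
For context, a DMAP block is just a four-byte content code, a four-byte big-endian length, and then the payload. Here's a minimal sketch in C of emitting one; the helper name is made up for illustration, not mt-daapd's actual API:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>

    /* Write one DMAP string block: 4-byte content code, 4-byte
     * big-endian payload length, then the payload bytes. */
    size_t dmap_add_string(uint8_t *out, const char *code, const char *val)
    {
        uint32_t len = (uint32_t)strlen(val);
        uint32_t be  = htonl(len);

        memcpy(out, code, 4);       /* e.g. "minm" = dmap.itemname */
        memcpy(out + 4, &be, 4);    /* payload length, big-endian  */
        memcpy(out + 8, val, len);  /* the tag value itself        */
        return 8 + len;
    }

    int main(void)
    {
        uint8_t buf[64];
        size_t n = dmap_add_string(buf, "minm", "Some Song");
        printf("encoded %zu bytes\n", n);
        return 0;
    }

Now multiply that by every tag of every song in the library and you can see why building it from scratch on each request hurts.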

You can do that one of two ways: run through the filesystem and do all of that work on each request, or do the memory- and processor-intensive part once, save the results to a database (or some other cache), and just send the cached data when you need to. What I'm saying is that this scanning trade-off is what makes something as underpowered as the slug even work as a music server.
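
The cached half of that trade-off looks roughly like this, assuming a sqlite cache along the lines of Firefly's songs.db. The table and column names here are illustrative, and cache_song() is a hypothetical helper, not mt-daapd's API:

    #include <sqlite3.h>

    /* Pay the scan cost once, at scan time: stash the synthesized
     * tag info in the cache so later requests can be served straight
     * from the database instead of re-reading every file. */
    int cache_song(sqlite3 *db, const char *path,
                   const char *artist, const char *album)
    {
        sqlite3_stmt *st;
        const char *sql =
            "INSERT INTO songs (path, artist, album) VALUES (?, ?, ?)";
        int rc;

        if (sqlite3_prepare_v2(db, sql, -1, &st, NULL) != SQLITE_OK)
            return -1;
        sqlite3_bind_text(st, 1, path,   -1, SQLITE_STATIC);
        sqlite3_bind_text(st, 2, artist, -1, SQLITE_STATIC);
        sqlite3_bind_text(st, 3, album,  -1, SQLITE_STATIC);
        rc = sqlite3_step(st);  /* execute the insert */
        sqlite3_finalize(st);
        return (rc == SQLITE_DONE) ? 0 : -1;
    }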

As far as the slug goes, there really aren't any trade-offs possible. You can't trade disk I/O for memory, or memory for CPU, because the disk I/O, the memory, and the CPU on the slug all suck. 🙂

It may be that large libraries just aren’t feasible on the slug. Dunno.

That said, I think performance on it could be improved by indexing the browse fields, but I dunno; none of the people I've asked who have a database that big have gotten back to me with the results of that experiment.
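
If you're up for it, the experiment is roughly the following, again assuming the sqlite backend. The column names match the songs table as far as I recall, but verify with ".schema songs" first; index_browse_fields() is a hypothetical helper, and you could just as easily run the same statements in the sqlite3 shell:

    #include <sqlite3.h>

    /* Add indexes on the fields the browse queries filter and
     * group by, so browsing doesn't force a full table scan. */
    int index_browse_fields(sqlite3 *db)
    {
        const char *sql =
            "CREATE INDEX IF NOT EXISTS idx_artist   ON songs(artist);"
            "CREATE INDEX IF NOT EXISTS idx_album    ON songs(album);"
            "CREATE INDEX IF NOT EXISTS idx_genre    ON songs(genre);"
            "CREATE INDEX IF NOT EXISTS idx_composer ON songs(composer);";
        return sqlite3_exec(db, sql, NULL, NULL, NULL);
    }

Then see whether browse requests actually get faster, at the cost of a somewhat bigger songs.db and slower scans.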

You willing to try messing with it in an effort to improve performance?