performance differences between itunes and nslu2


  • #9179
    chaintong
    Participant

    Ron – glad you’re looking into the speed thing…

    I’m not sure how much this helps, but here are my observations:

    I have just come to the conclusion (by trial and error) that the limit of my setup is somewhere around 20k songs with a wireless Roku. Using iTunes on a wireless laptop, it does eventually get the entire song list, but it takes over a minute. I think the Roku just times out, because I always get a “failed to load browse data” at around 20k. The iTunes setup will go all the way to 35k (the total number of songs in my library), if you wait long enough.

    Interestingly, if I reboot everything (slug, router, Roku) and let the whole thing settle down (wait 10 minutes), the Roku will load the song list when I select browse songs. If I then shut down and start the Roku up again, it won’t browse all the songs any more. Also, rebooting everything never enables me to browse albums or artists, which is a bit weird.

    As far as the Roku is concerned, I think the problem was made worse when I started a concerted effort to update all my tags using MediaMonkey (including putting cover art in the tags to stop all those messy files cluttering up my directories). Would adding tag data make things worse?

    I have added sqlite indexes on titles, albums and artists, and this made no difference to my ability to browse 25k songs.
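    In case anyone wants to try the same thing, here is a rough sketch of adding those indexes through the sqlite3 C API. The database path and the songs table/column names are assumptions for a typical NSLU2 install and may differ on yours, so treat it as illustrative only.

```c
/* Sketch only: add browse indexes to the Firefly song database.
 * Assumes the sqlite3 backend, a table named "songs" with columns
 * "title", "album" and "artist", and a guessed NSLU2 db path. */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = NULL;
    const char *sql =
        "CREATE INDEX IF NOT EXISTS idx_songs_title  ON songs(title);"
        "CREATE INDEX IF NOT EXISTS idx_songs_album  ON songs(album);"
        "CREATE INDEX IF NOT EXISTS idx_songs_artist ON songs(artist);";

    if (sqlite3_open("/opt/var/mt-daapd/songs3.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }
    if (sqlite3_exec(db, sql, NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "index creation failed: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}
```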

    I can’t find the “correct order” flag to speed things up.

    Will sqlite3 help here?

    Is there anything else worth trying? Otherwise I’ll just cut my songs down to around 15k and wait until the next fix.

    Later

    Tom

    #9180
    CCRDude
    Participant

    Covers shouldn’t matter, because browsing only takes the necessary metadata from the central db, not from each file.

    20k songs are no problem at all for my Roku. Are you using a nightly (with rsp) or 0.2.4?

    The “correct order” flag is in the web interface if you switch the config page to advanced view imho 🙂
    Config category “Databases”, last (sixth) line “Ordered Playlists”.

    #9181
    rpedde
    Participant

    @CCRDude wrote:

    With svn-1498 & sqlite3, accessing 20k songs took 1:42.
    With 0.2.4 & gdbm, accessing 20k songs took 0:28.

    Try it without playlists on sqlite3.

    I don’t think that’s all due to traversal. I get a full gdbm traversal in sub-second times. I think the big difference there is gdbm versus sqlite.

    Would have tried svn-1498 with gdbm, but sadly the new configure requires me to use either --enable-sqlite or --enable-sqlite3 and doesn’t accept just --enable-gdbm, and the config file doesn’t allow it either.

    The gdbm isn’t quite there yet.

    But this difference is even more than 1:3!

    Again, this isn’t all due to the multiple passes; I think it might also be the db. I want to get a gdbm backend in there, and that will be apples to apples.

    — Ron

    #9182
    chaintong
    Participant

    CCRDude: are you using the Roku wirelessly?

    I am running version svn-1498, which I understand to be the latest nightly. I’m not sure about rsp; I assume that just comes with the latest nightly.

    I found the Ordered Playlists option, but because I don’t actually use playlists it made no difference to my ability to browse albums, artists, or songs.

    The next idea (from Ron in another thread) is to move the tmp directory from flash to the hard drive.

    #9183
    CCRDude
    Participant

    @chaintong: yes I am, and with not the best signal strength either. The only times I ever have browsing problems are during those eternal waits when any Windows machine tries to access the server; during those 2 minutes, the Roku is unable to browse and fails.

    @ron: well, I’ll try disabling the smart playlists when the machine isn’t in use for a while, once I find the sqlite commands to rename the playlists table to something else temporarily, so I don’t have to enter them all again afterwards 😉
    If that is the case, it seems iTunes is coded even worse than I could imagine 😀 The protocol at least allows playlists to be read separately, but you may be right: they appear in the client immediately once it is opened, so iTunes might be reading their contents directly on connection as well.
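    For reference, a minimal sketch of that temporary rename via the sqlite3 C API. The database path and the table name playlists are assumptions for a stock Firefly sqlite3 setup (this is not Firefly code), so check your own install, stop the server and back the file up first.

```c
/* Sketch only: temporarily "park" the smart playlists by renaming the
 * table, then restore them later. The db path and the table name
 * "playlists" are assumptions; stop the server before touching the db. */
#include <stdio.h>
#include <string.h>
#include <sqlite3.h>

static int run(sqlite3 *db, const char *sql)
{
    char *err = NULL;
    if (sqlite3_exec(db, sql, NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "%s\n  -> %s\n", sql, err);
        sqlite3_free(err);
        return -1;
    }
    return 0;
}

int main(int argc, char **argv)
{
    sqlite3 *db;
    int restore = (argc > 1 && strcmp(argv[1], "restore") == 0);

    if (sqlite3_open("/opt/var/mt-daapd/songs3.db", &db) != SQLITE_OK)
        return 1;

    if (restore)   /* put the table back under its original name */
        run(db, "ALTER TABLE playlists_parked RENAME TO playlists;");
    else           /* hide it from the server */
        run(db, "ALTER TABLE playlists RENAME TO playlists_parked;");

    sqlite3_close(db);
    return 0;
}
```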

    #9184
    rpedde
    Participant

    @CCRDude wrote:

    @ron: well, I’ll try disabling the smart playlists when the machine isn’t in use for a while, once I find the sqlite commands to rename the playlists table to something else temporarily, so I don’t have to enter them all again afterwards 😉
    If that is the case, it seems iTunes is coded even worse than I could imagine 😀 The protocol at least allows playlists to be read separately, but you may be right: they appear in the client immediately once it is opened, so iTunes might be reading their contents directly on connection as well.

    Right, it sucks *everything* down on the connect. Until I get a gdbm-backed db, though, it will be hard to tell where the bottleneck is. It might be that indexed reads into the database are fast enough that I could build a small btree in memory with just the sizes, and stream it all out on a pass through the tree. That way, I’m building it in memory and getting rid of a pass on the db.

    Dunno… might play with the memory map too, just to see how it shakes out. I just gotta finish up the vista crap first. Ugh.
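    To make the one-pass idea above a bit more concrete, here is a rough sketch (not Firefly code): a single db traversal caches (id, size) pairs in memory so the total length is known before anything is streamed. A plain array stands in for the btree, and the songs/id/file_size names are guesses at the schema.

```c
/* Sketch only (not Firefly code): one pass over the song table caches
 * (id, size) pairs in memory, so the total response length is known up
 * front and the items can be streamed without a second db pass.
 * "songs", "id" and "file_size" are guesses at the schema. */
#include <stdio.h>
#include <stdlib.h>
#include <sqlite3.h>

struct entry { int id; int size; };

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;
    struct entry *list = NULL;
    size_t count = 0, cap = 0;
    long total = 0;

    if (sqlite3_open("/opt/var/mt-daapd/songs3.db", &db) != SQLITE_OK)
        return 1;
    if (sqlite3_prepare_v2(db, "SELECT id, file_size FROM songs;",
                           -1, &stmt, NULL) != SQLITE_OK) {
        sqlite3_close(db);
        return 1;
    }

    /* the single db traversal: collect ids and per-item sizes */
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        if (count == cap) {          /* grow the in-memory list */
            cap = cap ? cap * 2 : 1024;
            list = realloc(list, cap * sizeof(*list));
        }
        list[count].id = sqlite3_column_int(stmt, 0);
        list[count].size = sqlite3_column_int(stmt, 1);
        total += list[count].size;
        count++;
    }
    sqlite3_finalize(stmt);
    sqlite3_close(db);

    /* a real server would now send the header with the exact length
     * and then stream the cached list without touching the db again */
    printf("%zu items, %ld bytes total\n", count, total);
    free(list);
    return 0;
}
```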

    #9185
    CCRDude
    Participant

    Hmmmm… I get more confused by the day 😀

    I’ll repeat my previous results and add one more:

    With svn-1498 & sqlite3, accessing 20k songs & 10 smart playlists took 1:42.
    With svn-1498 & sqlite3, accessing 20k songs & no smart playlists took 0:47.
    With 0.2.4 & gdbm, accessing 20k songs with ??? took 0:28.

    I guess I’ll have to check with 0.2.4 again to make sure whether that was with playlists or not; looking at those times now, if 0.2.4 was without smart playlists, the sqlite3 loss may only be 50% instead of 350% 🙂
    In that case, the smart playlists would be at fault. I’ve been thinking about how this situation could be improved… do you make additional passes for those (probably)? In that case, the file caching method could probably be tuned even finer… though I’m starting to think that applying the smart playlist criteria to all those songs ten times might also be a factor, and that couldn’t really be influenced.

    Need to get used to using a Soundbridge in the office as well, I guess… 😀

    #9186
    fizze
    Participant

    Well, smart playlists are a pain in the performance neck, deffo.

    They are really “smart” though; that is, they are created in real time for every query. Buffering them would greatly improve performance. I recently played a little with smart playlists along the lines of “all except 2-7 dirs”, which turned out awkward, performance-wise.

    A list with all the song IDs should suffice, shouldn’t it? 😉

    #9187
    CCRDude
    Participant

    Hmmm right…

    * another table (ouch *g*), simple songid-playlistid.
    * updating this table (add & remove) whenever a smart playlist is changed.
    * updating this table (add) when the scan finds new songs.

    Hmmm, sorry… I’ve been thinking too much in databases recently… just have a separate file named playlist.cache in the cache folder, put each song id in there as an unsigned long, and that’s it. Even with 20k songs that would not mean too much memory while using it as a help in processing.
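    A quick sketch of that cache-file idea (again, not actual Firefly code): song ids written out as raw unsigned longs and read back in one go. The playlist.cache name is just the one suggested above; a real version would need one file per playlist, or the playlist id stored alongside each entry.

```c
/* Sketch only: dump a playlist's song ids into playlist.cache as raw
 * unsigned longs and read them back in one go. Not Firefly code; a
 * real version would key the file (or its entries) by playlist id. */
#include <stdio.h>
#include <stdlib.h>

static int write_cache(const char *path, const unsigned long *ids, size_t n)
{
    FILE *fp = fopen(path, "wb");
    if (!fp)
        return -1;
    size_t written = fwrite(ids, sizeof(unsigned long), n, fp);
    fclose(fp);
    return (written == n) ? 0 : -1;
}

static unsigned long *read_cache(const char *path, size_t *n_out)
{
    FILE *fp = fopen(path, "rb");
    unsigned long *ids;
    long bytes;

    *n_out = 0;
    if (!fp)
        return NULL;
    fseek(fp, 0, SEEK_END);
    bytes = ftell(fp);
    rewind(fp);
    ids = malloc(bytes > 0 ? (size_t)bytes : 1);
    *n_out = fread(ids, sizeof(unsigned long), (size_t)bytes / sizeof(unsigned long), fp);
    fclose(fp);
    return ids;
}

int main(void)
{
    unsigned long demo[] = { 101, 102, 205, 4711 }; /* made-up song ids */
    size_t n = 0;
    unsigned long *back;

    write_cache("playlist.cache", demo, sizeof(demo) / sizeof(demo[0]));
    back = read_cache("playlist.cache", &n);
    printf("read back %zu ids, first = %lu\n", n, (back && n) ? back[0] : 0UL);
    free(back);
    return 0;
}
```

    Even 20k ids at 4 or 8 bytes apiece is only around 80-160 KB, which backs up the point about memory.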
