AudioScrobbler / last.fm support
13/02/2008 at 9:18 AM #5254 magnate (Participant)
@barefoot wrote:
@magnate wrote:
Obviously it’s not as good as having mt-daapd output the details of each song into /var/spool/lastfm play by play, but it’s a reasonable stopgap. I don’t tend to listen to any song more than once in a day, so it works for me.
Thanks for the script. However, does it really work for you every time? It does not for me: from time to time lastfmsubmitd ignores the written files and does not submit the information. Which tracks get submitted and which don’t seems fairly random.
I first suspected empty files, but that’s not the reason. No clue…
Oh dear, sorry to hear that it’s giving you trouble. It took me a long time to get it working right, but as posted it now works for me every time. I’ve checked, and since I finished it, it hasn’t failed to scrobble anything I’ve listened to.
What is in your lastfmsubmitd logs? (/var/log/lastfm/*)
What versions are you running (of mt-daapd, of python, etc.)?
What distro are you using?
I wonder if we are using slightly different versions of lastfmsubmitd – I’m using the one packaged for Debian, which I think has been slightly tweaked from the original.
Sorry I don’t have any better ideas at the moment, but happy to try and help.
Cheers,
CC
13/02/2008 at 10:29 AM #5255 barefoot (Participant)
@magnate wrote:
What is in your lastfmsubmitd logs? (/var/log/lastfm/*)
What versions are you running (of mt-daapd, of python, etc.)?
What distro are you using?
I wonder if we are using slightly different versions of lastfmsubmitd – I’m using the one packaged for Debian, which I think has been slightly tweaked from the original.
Sorry I don’t have any better ideas at the moment, but happy to try and help.
Thanks for the help. Actually, I managed to work around the problem yesterday, even if I don’t know the exact reason for the failure. What I do now is: at the end of the script, I sleep for 5 seconds and then collect all files in the spool directory (if any still exist) into one single file in a separate place and copy it back to the spool directory. This file is recognized reliably (and if not, it is collected next time); all the others get erased. I don’t know why it works, but it does.
lastfmsubmitd recognizes the files it does not scrobble (if I erase them, it complains that they vanished and crashes). Maybe it works too fast and begins scanning the files before they are completely written – but then my workaround should also fail. No clue.
I use lastfmsubmitd from source on a slow MIPS machine with SLUG binaries. Python is currently 2.4; should I maybe upgrade to 2.5?
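In shell terms, the consolidation step described above might look roughly like this (untested sketch; the spool path is the one used elsewhere in this thread, the temp path and output filename are just guesses):
# rough sketch of the consolidation workaround
SPOOL=/var/spool/lastfm
TMP=/tmp/lastfm-collected
sleep 5
if ls "$SPOOL"/* >/dev/null 2>&1; then
    cat "$SPOOL"/* > "$TMP"          # merge whatever lastfmsubmitd has not picked up yet
    rm -f "$SPOOL"/*                 # clear out the originals
    cp "$TMP" "$SPOOL/collected.$$"  # hand the merged file back in one piece
    chmod 664 "$SPOOL/collected.$$"
fi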
14/02/2008 at 8:00 AM #5256 magnate (Participant)
@barefoot wrote:
@magnate wrote:
What is in your lastfmsubmitd logs? (/var/log/lastfm/*)
What versions are you running (of mt-daapd, of python, etc.)?
What distro are you using?
I wonder if we are using slightly different versions of lastfmsubmitd – I’m using the one packaged for Debian, which I think has been slightly tweaked from the original.
Sorry I don’t have any better ideas at the moment, but happy to try and help.
Thanks for the help. Actually, I managed to work around the problem yesterday, even if I don’t know the exact reason for the failure. What I do now is: at the end of the script, I sleep for 5 seconds and then collect all files in the spool directory (if any still exist) into one single file in a separate place and copy it back to the spool directory. This file is recognized reliably (and if not, it is collected next time); all the others get erased. I don’t know why it works, but it does.
lastfmsubmitd recognizes the files it does not scrobble (if I erase them, it complains that they vanished and crashes). Maybe it works too fast and begins scanning the files before they are completely written – but then my workaround should also fail. No clue.
I use lastfmsubmitd from source on a slow MIPS machine with SLUG binaries. Python is currently 2.4; should I maybe upgrade to 2.5?
Well, according to Debian, lastfmsubmitd is agnostic about which version of Python you have, so it shouldn’t be necessary – but it can’t do any harm to upgrade. If you wanted to look into lastfmsubmitd itself (it’s actually only a couple of hundred lines of Python code), you could probably find where it waits for files to appear in the spool dir and lengthen the wait from every few seconds to every minute or so – that should solve the slow-writing problem, if that’s the cause of the failure.
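A rough way to find that spot, if you do want to poke at it (the install locations below are only guesses and may well differ on your system):
# hypothetical: search likely install locations for the sleep/polling interval
grep -rn "sleep" /usr/bin/lastfmsubmitd /usr/lib/python2.*/site-packages/lastfm* 2>/dev/null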
I remembered one other thing yesterday – the awk invocation in the script uses the strftime feature which is in gawk but not in mawk. I thought that might have been the problem, if you were running mawk, but it would have had a completely different solution from the one you found. Oh well. Well done on sorting it out,
CC
20/02/2008 at 3:28 AM #5257 Anonymous (Inactive)
I would love to see last.fm support for Windows users. I love my SoundBridge and I love Firefly, but I really, really miss last.fm…
20/02/2008 at 4:53 PM #5258 jtbse (Participant)
@oranges wrote:
I would love to see last.fm support for Windows users. I love my SoundBridge and I love Firefly, but I really, really miss last.fm…
oranges…
Have you seen sbPopper?
http://www.monkeylicense.com/sbpopper
It runs on Windows and “watches” your SoundBridge for what’s playing, then scrobbles it. The disadvantage is that you have to keep Windows and Last.FM running for it to work, but I’m guessing you keep Windows running anyway if you run Firefly on it.
21/02/2008 at 2:56 PM #5259 Anonymous (Inactive)
Cool… thanks, I’ll check it out.
24/02/2008 at 5:06 PM #5260 FrankZabbath (Guest)
First of all, thanks a lot to magnate and all the other contributors to this AudioScrobbler solution. It now seems to work fine on my NSLU2 (armeb) running Unslung 6.10 with Firefly svn-1586, lastfmsubmitd-0.37 and Python 2.4. So here’s my feedback, along with my working database query script.
It took me some time before I finally tried a different approach from the one suggested here:
@barefoot wrote:
Thanks for the help. Actually, I managed to work around the problem yesterday, even if I don’t know the exact reason for the failure. What I do now is: at the end of the script, I sleep for 5 seconds and then collect all files in the spool directory (if any still exist) into one single file in a separate place and copy it back to the spool directory. This file is recognized reliably (and if not, it is collected next time); all the others get erased. I don’t know why it works, but it does.
So what I basically do is first write the query result file somewhere outside the spool dir and then move it into the spool. That way lastfmsubmitd processes the file every time (so far ;).
Additionally, I made the script query the database only if the database file has changed since the last run. Furthermore, a spool file is only written if a database query has actually been done, instead of always creating 0-byte files when the script runs.
I also had to fix the timezone offset by -1 h ($6-3600 seconds in the gawk statement) in the query result to report GMT+0 timestamps, as otherwise the times would be GMT+1 and thus shown in the future in my last.fm stats.
So here’s my currently running script:
#!/bin/bash
# fetch newly played songs from fireflydb and write
# into lastfmsubmitd readable format
# config
SQLITE=sqlite
DATABASE=/opt/var/mt-daapd/songs.db
LASTFILE=/opt/var/mt-daapd/lastfmsubmit.date
DBLSFILE=/opt/var/mt-daapd/lastfmsubmit.ls
# get last run time
if [ -e $LASTFILE ]
then
. $LASTFILE
else
LASTRUN=0
fi
# get last database file date
if [ -e $DBLSFILE ]
then
. $DBLSFILE
else
DBLSRUN=
fi
# exit when database file unchanged
DBLSNOW=`ls -l $DATABASE`
if [ "$DBLSRUN" == "$DBLSNOW" ]
then
exit
fi
# log file date
echo "DBLSRUN="$DBLSNOW"" > $DBLSFILE
# query database
OUTFILE=$(mktemp /tmp/mt-daapd-XXXXXXXX)
$SQLITE $DATABASE 'SELECT artist,album,title,track,song_length,time_played FROM songs where time_played > '$LASTRUN' ORDER BY time_played ASC;' | gawk -F '|' '{ printf "---\nartist: \"%s\"\nalbum: \"%s\"\ntitle: \"%s\"\ntrack: %s\nlength: %d\ntime: !timestamp %s\n",$1,$2,$3,$4,$5/1000,strftime("%Y-%m-%d %T",$6-3600) }' > $OUTFILE
mv $OUTFILE /var/spool/lastfm
# log query date
echo "LASTRUN="`date +%s` > $LASTFILE
# make lastfmsubmitd files readable
chmod 664 /var/spool/lastfm/*
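For reference, on a database row for a single track the gawk line above produces a spool entry along these lines (the values here are made up):
---
artist: "Some Artist"
album: "Some Album"
title: "Some Title"
track: 7
length: 243
time: !timestamp 2008-02-24 16:05:12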
Be sure to adjust the paths, the sqlite executable and the time offset to your needs. This script needs bash and gawk, both available via ipkg.
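If you’d rather not hardcode the -3600, one possible approach (untested, and it assumes your date supports %z) is to derive the offset at the top of the script and pass it into gawk:
# derive the local UTC offset in seconds, e.g. "+0100" -> 3600
OFFSET=$(date +%z | gawk '{ s=substr($0,1,1); h=substr($0,2,2); m=substr($0,4,2); printf "%s%d", s, h*3600+m*60 }')
# then call gawk with -v OFF=$OFFSET and use strftime("%Y-%m-%d %T", $6-OFF) in the printf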
I haven’t yet tried changing the shebang line from bash to sh; it might be better to do so.
04/03/2008 at 6:20 PM #5261 uncleremus (Participant)
@t0m wrote:
I’ve been testing this script for two weeks now (cron job running every 5 minutes) – it’s working fine for me.
/t0m
Works fine for me also, very nice job, thanks!
One detail: even if I just scroll past a song and listen for a second, it will be scrobbled on last.fm. Is there any possibility that it could behave like Amarok, iTunes, etc., which have some minimum qualifications for a “listen”, such as requiring that at least half the song has been played, or similar?
(Another thing: The usage message indicates that “-d” is the option for specifying db_file, but in the perl code it looks like it is expecting this as the “-r” option.)
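For anyone wanting to replicate t0m’s setup, a crontab entry along these lines should do it (the script path is just a placeholder, and very minimal cron implementations may not accept the */5 step syntax):
# run the query script every 5 minutes, discarding its output
*/5 * * * * /opt/etc/lastfm-query.sh >/dev/null 2>&1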
08/03/2008 at 8:28 PM #5262 magnate (Participant)
Thanks to FrankZabbath for a greatly improved lastfmsubmitd script. It took me hours to figure out why it wasn’t working – I kept getting the error message
: no such file or directory
… turned out I had copied the script over in DOS text format, so it was interpreting the first line as
#!/bin/bash ^M
and was looking for a file called ^M.
Ho hum. Sorted now, and running happily.
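If anyone else hits this, stripping the carriage returns is quick (the script name below is just a placeholder; use whichever tool you have installed):
dos2unix lastfm-query.sh           # if the dos2unix package is available
sed -i 's/\r$//' lastfm-query.sh   # GNU sed alternative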
Thanks again,
CC
21/09/2008 at 8:34 PM #5263 thorstenhirsch (Participant)
After upgrading Python from 2.3 to 2.5.2 on my Linkstation, I’m also able to use lastfmsubmitd with FrankZabbath’s script.
If anyone else has problems with 0 byte files in the spool directory, this might be caused by the following line:
SQLITE=sqlite
Well, in my case it’s sqlite3, but that’s not the point. The cause was that sqlite(3) is not given as an absolute path here, so there is no output on stdout later in the script when sqlite(3) is called. To fix the problem, just give the full sqlite(3) path:
SQLITE=/usr/local/bin/sqlite3
You won’t have the problem if $PATH is set correctly when the bash process is created, but that’s not the case on my Linkstation.
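As an alternative to hardcoding the path, one could also set PATH explicitly at the top of the script, which helps when cron starts it with a minimal environment. A rough sketch (the paths are guesses for a typical NSLU2/Linkstation setup):
# make sure the directories containing sqlite3 and gawk are on the PATH
PATH=/usr/local/bin:/opt/bin:/usr/bin:/bin
export PATH
# or locate the binary once and bail out if it is missing
SQLITE=$(which sqlite3 2>/dev/null) || { echo "sqlite3 not found in PATH" >&2; exit 1; }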