venta: (Default)
[personal profile] venta
It's Friday! It's about three o'clock! It's time to Boogie At Your Desk!

Friday afternoons need a little something. I think they need a Top Tune. Something to make you shuffle in your seat and, if possible, Boogie At Your Desk. I'll be endeavouring to fill this gap some Fridays this year.

I'm not claiming that any track provided to enable At-Desk Boogying is one of the world's best or most profound pieces of music. It will, however, be one of the tunes which make me smile, and which have at some stage made me surreptitiously Boogie At My Desk.

Desks are not compulsory, of course. Feel free to boogie through your office, in your bedroom, round your lab, across your classroom, on the train - wherever you find yourself on a Friday afternoon.

If you like the track, go out and buy the album it belongs to - I'll try and recommend a suitable CD to purchase for any BAYD track.

This link will expire when I leave work this evening. Unless [livejournal.com profile] broadmeadow is willing to leave it up for a bit longer, as some people tell me they like a sneaky post-work boogie of a Friday evening.

Today you were invited to Boogie At Your Desk to:

Dropkick Murphys - Heroes From Our Past


Nearly a year ago, I went to see the Dropkick Murphys in London. Despite them being a little patchy live, it remains (I think) the most enthusiastic gig I've ever been to.

Ideal for a bit of at-desk boogying, I think.

And if nothing else, you can all have a good giggle at the idea of someone singing, in a punk-stylee, "When we think back to our ancestors respectfully we hark...". Woah, yeah, man, it's anarchy! Smash the system, but with due respect to the ancestors.

I only own one Dropkick Murphys CD, which is Sing Loud, Sing Proud!. And if you want seven slightly drunken blokes (plus bagpipes) cheerfully yelling their way through the occasional traditional (mostly Irish) folk song, and some original compositions (including the remarkably good The New American Way), then you want this CD.

I certainly wouldn't unconditionally recommend this band, CD, or indeed track. It's easy to have too much of them, and I'm sure some people will loathe them on hearing. But for a bit of simple, singalong fun, they're a damn good bet. I prescribe buying an album, then occasionally plonking a couple of tracks onto a late-night driving compilation.

Date: 2005-02-25 03:32 pm (UTC)
From: [identity profile] onebyone.livejournal.com
if you're sure your server doesn't mind.

I found out about a quite awesome thing yesterday, which is that if you convert the URL by adding ".nyud.net:8090" after the domain, like so:

http://www.broadmeadow.plus.com.nyud.net:8090/bayd.mp3

(If the original server isn't on port 80 but for instance 8080, you'd have to do it like so: http://www.broadmeadow.plus.com.8080.nyud.net:8090/bayd.mp3)

Then a thing called CoralCDN will serve the file to you from a massively distributed web cache, so the origin server doesn't get hit so often.
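The rewrite rule described above can be sketched in a few lines of Python. This is only a guess at the rule from the two examples given, not CoralCDN's own code:

```python
from urllib.parse import urlsplit, urlunsplit

def coralize(url):
    """Rewrite a URL so it is fetched via the CoralCDN cache.

    The hostname gains a ".nyud.net:8090" suffix; if the origin
    server runs on a non-standard port, that port becomes part of
    the hostname instead (e.g. host.example.8080.nyud.net:8090).
    """
    parts = urlsplit(url)
    host = parts.hostname
    if parts.port and parts.port != 80:
        # Fold the origin port into the hostname, per the example above.
        host = "%s.%d" % (host, parts.port)
    netloc = host + ".nyud.net:8090"
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))
```

So `coralize("http://www.broadmeadow.plus.com/bayd.mp3")` produces the Coralized URL shown above.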

From a few random tests I've done, it's somewhat slower when the origin server isn't overloaded, even apart from the first ever hit, which obviously will be. It should be good though if your upload bandwidth is either metered or is currently saturated.

It also kind of assumes that the hits will come in quick succession, since the machines doing the caching have limited space to work with, and drop stuff out of their cache in part according to what was least recently used.
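The least-recently-used part of that eviction can be illustrated with a toy cache. Purely illustrative: as noted above, CoralCDN's real policy only uses recency as one factor, and this sketch ignores object sizes entirely:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny least-recently-used cache (illustration only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)      # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used
```

Anything that hasn't been touched recently falls out first, which is exactly why hits arriving in quick succession suit this kind of cache.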

And better yet!

Date: 2005-02-25 04:06 pm (UTC)
From: [identity profile] wimble.livejournal.com
They actually seem to be encouraging people to do this, rather than it being an undesired piggyback on somebody else's capacity.

From my home machine, I normally get a throughput of 15k/s. Having just pointed it at my playlist manager (approximately a 1.8 Meg download), I got 360k/s on the second transfer!

I'm on a university line, and seem to be getting about a 4.5 Megabit connection to their cache. I imagine many others would see a lower rate, as it'll be capped by their employer's bandwidth. But it does take a huge load off the end server.

(Note: it rewrites URLs on static pages, but not on dynamic ones.)

Re: And better yet!

Date: 2005-02-25 04:18 pm (UTC)
From: [identity profile] onebyone.livejournal.com

They actually seem to be encouraging people to do this

Definitely. I think that's partly because they're nice, partly because they're funded by US research money, and partly because this is still a somewhat experimental phase of the project. Experimental in that they've got the software working as they want it, but they only have about 500 nodes because there's no way to accept untrusted cache nodes into the network without support from the origin server and the client.

I got 360k/s on the second transfer

Nice. To be fair, I was trying it on "big sites" rather than sites likely to be limited by some poor schmuck's ADSL line.

it rewrites URLs

Not according to the FAQ. Are you sure you aren't just looking at pages (or playlists) containing relative URLs?

Re: And better yet!

Date: 2005-02-25 04:21 pm (UTC)
From: [identity profile] wimble.livejournal.com
Are you sure you aren't just looking at pages (or playlists) containing relative URLs?

D'oh! Good point. Well made.

<goes off to smack self around the head with a "view source" button>

Re: And better yet!

Date: 2005-02-25 04:46 pm (UTC)
From: [identity profile] broadmeadow.livejournal.com
This also happens with ISPs' (et al.) own caches, of course - so you'd often see this effect anyway.

Re: And better yet!

Date: 2005-02-25 04:59 pm (UTC)
From: [identity profile] wimble.livejournal.com
Depending on their expectations: my server is my home PC, so NTL don't provide a machine to cache my content (it being in the opposite direction to their normal traffic).

And [livejournal.com profile] venta's hosting seems to apply a 5Kb/s limit, shared amongst all the clients. So a 75-fold upgrade, with no client limits, "might not be an entirely unwelcome" thing (though it will require [livejournal.com profile] venta to start downloading the file herself at about 2.45, in order to get it into the cache...)

Re: And better yet!

Date: 2005-02-25 05:12 pm (UTC)
From: [identity profile] broadmeadow.livejournal.com
I was thinking of the case where people access your server and _their_ ISP caches the content. It should be the case, for example, that no matter how many people here at Tao accessed the file it was only downloaded from PlusNet to Tao once.

For this same reason, getting a faster throughput when you access a slow server on the second transfer (which is what I was referring to, but didn't make clear) might be expected anyway: on the first transfer the data is (slowly) downloaded from the host site and cached by your ISP; on the second you are simply retrieving it from your ISP's cache.

Re: And better yet!

Date: 2005-02-25 06:17 pm (UTC)
From: [identity profile] wimble.livejournal.com
Ah. Of course: that didn't occur to me. Brookes doesn't run a web cache, on the grounds that, for the most part, 15000 users hitting the internet (mostly) at random doesn't make it economically viable. So my changes weren't due to a local cache.

The CoralCDN cache is still worthwhile, as it'll serve N ISP proxies, rather than each ISP having to fetch their own copy from the source server.

Re: And better yet!

Date: 2005-02-25 06:29 pm (UTC)
From: [identity profile] onebyone.livejournal.com
15000 users hitting the internet (mostly) at random doesn't make it economically viable

Ooh, now you've got me thinking about viable strategies for a smallish, cheapish cache in such an environment (e.g. only cache once the frequency of hits to a particular URL passes a certain threshold across a certain time window; discard cached objects according to the product of their size and frequency of hits; and some kind of load monitoring, so that you can just bypass the cache when its load tops out).
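The admit-past-a-threshold and evict-by-size-times-frequency ideas can be sketched together. All the names, the scoring, and the numbers below are my own guesses at what's being proposed, not a tested design:

```python
import time

class ThresholdCache:
    """Sketch of the strategy above: admit a URL only after it has
    been requested `threshold` times within `window` seconds, and
    evict whichever cached object saves the least bandwidth
    (hits * size in bytes) when over capacity. Illustration only.
    """

    def __init__(self, capacity_bytes, threshold=3, window=3600.0):
        self.capacity = capacity_bytes
        self.threshold = threshold
        self.window = window
        self.recent_hits = {}   # url -> timestamps of recent misses
        self.cache = {}         # url -> (body, hit count)

    def request(self, url, fetch):
        now = time.monotonic()
        if url in self.cache:
            body, hits = self.cache[url]
            self.cache[url] = (body, hits + 1)
            return body
        # Track recent hits; admit only once past the threshold.
        hits = [t for t in self.recent_hits.get(url, [])
                if now - t < self.window]
        hits.append(now)
        self.recent_hits[url] = hits
        body = fetch(url)
        if len(hits) >= self.threshold:
            self.cache[url] = (body, 1)
            self._evict()
        return body

    def _evict(self):
        while (sum(len(b) for b, _ in self.cache.values()) > self.capacity
               and self.cache):
            # Drop the entry with the smallest hits * size product.
            victim = min(self.cache,
                         key=lambda u: self.cache[u][1] * len(self.cache[u][0]))
            del self.cache[victim]
```

With a threshold of 2, the first two requests for a URL go to the origin; every later one within the window is served from the cache.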

Re: And better yet!

Date: 2005-02-25 06:38 pm (UTC)
From: [identity profile] wimble.livejournal.com
Does that really work? If you only cache once the frequency of hits crosses a threshold, you run the risk that you're going to put it in the cache at the moment when nobody else is going to be interested (and you've just missed a number of interested users).

'course, I haven't thought about it any further than that, so I dunno...

Re: And better yet!

Date: 2005-02-25 07:03 pm (UTC)
From: [identity profile] onebyone.livejournal.com
Yes, you're going to run that risk, but you can do a likelihood estimate (either statically when configuring the cache, or dynamically whenever the cache feels like it) which judges whether there's likely to be another hit by considering what usually happens when URLs get hit particular numbers of times.

For example, if it's incredibly common for pages (or pages on a particular site) to be hit exactly 27 times per day, then don't set the threshold (for that site) to "27 in one day". 12 is looking good, though.

If the distribution is quite smooth, then it probably doesn't matter that sometimes you'll cache at the wrong moment, because usually you won't. Just pick the value that would have given the most hits on average based on your past traffic - the good thing about having 15000 people acting randomly is the Central Limit Theorem.
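The "pick the value that would have given the most hits" idea can be made concrete with a toy scoring function. The insert_cost figure standing in for the opportunity cost of an admission is made up, as is everything else here:

```python
def score_threshold(hit_counts, threshold, insert_cost=1.0):
    """Score a cache-admission threshold against past traffic.

    For a URL hit n times, admitting it on its threshold-th hit
    serves the remaining (n - threshold) requests from cache; each
    admission pays insert_cost (pushing something else out).
    Illustrative numbers only.
    """
    served = sum(n - threshold for n in hit_counts if n >= threshold)
    admissions = sum(1 for n in hit_counts if n >= threshold)
    return served - insert_cost * admissions
```

With the "everything gets hit exactly 27 times" example above, a threshold of 27 admits everything just too late (every admission is pure cost), while 12 scores far better.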

I'm also guessing that the cache operation itself is quite cheap except for the opportunity cost it incurs by pushing something else out. So if you get even one hit on an object during its life in the cache (i.e. before it expires or is pushed out), then you probably win. The question then is how to get the number of wins above the point where it's worth the price of the server.

I'm sure all these issues are quite thoroughly studied, though, because the trivial cache strategy "save everything and discard the oldest" has its own problems, namely that large rare downloads will keep emptying your entire cache but then not getting hit.

Date: 2005-02-25 04:14 pm (UTC)
From: [identity profile] venta.livejournal.com
Ooh, cool. I'll have a look at that. Thanks.

Date: 2005-02-25 04:41 pm (UTC)
From: [identity profile] broadmeadow.livejournal.com
Ooo, will have to investigate that.

In this instance, though, there is no problem. It's on www.broadmeadow.plus.com, which is hosted by my ISP and more than capable of coping with the relatively small number of hits it will get.

I host broadmeadow.plus.com (and others) myself using an ADSL line, and that would have been somewhat swamped by more than one request for the mp3 file at a time!

Yes, www.broadmeadow.plus.com and broadmeadow.plus.com being different is not ideal! PlusNet give you a fixed IP address and <user>.plus.com resolves to it so that reverse DNS lookups work. They also give you web space on their servers accessible at www.<user>.plus.com, which is all that most users will ever use. Ideally they'd enable me to set www.broadmeadow.plus.com to the same as broadmeadow.plus.com, and provide www2.broadmeadow.plus.com for their server, but they don't.
