While it has become rather easy to score stuff off the net thanks to p2p file sharing, there is another old-skool method of getting the goodies that many people aren't aware of: via a search engine.
This relies on the fact that lots of people (fucking idiots, if you ask me) dump a shitload of stuff on their web servers, to share with friends, as a temporary backup or whatever, and then totally forget about it. They don't realise that web crawlers will index these directories and essentially make them public in no time.
Anyway, since these files are just dumped into a directory with no index page, most web servers (Apache, for one) generate an automatic directory listing for them by default, and these listings almost always share a common format, which looks something like the format on this example page. Note the key headers for each column: Name, Last modified, Size and Description. Similarly, note the term Index of at the top of the page. These terms are always present in the pages we want to search.
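In case the example page linked here is long gone, this is roughly what an auto-generated listing looks like (a mock-up; the directory and file names are invented):

```
Index of /music/backup

Name                   Last modified      Size  Description
Parent Directory                             -
01-somesong.mp3        12-Mar-2004 18:30  4.2M
02-anothersong.mp3     12-Mar-2004 18:31  3.9M
```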
So, now that we know what to look for, let's head to Google and enter the following in the query box:
"index of" "last modified" name size description "parent directory"
These are the base terms. Now add terms for the files you are actually searching for. So, if you are interested in Led Zep mp3s, the complete query will look like this:
"index of" "last modified" name size description "parent directory" mp3 led zep
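If you find yourself doing this a lot, building the queries by hand gets old. Here's a quick sketch of how you might glue the base terms and your search terms together in Python (the function names and the search URL construction are my own; the base terms are straight from above):

```python
from urllib.parse import quote_plus

# The base terms that match auto-generated directory listings.
BASE_TERMS = '"index of" "last modified" name size description "parent directory"'

def build_dork(*extra_terms: str) -> str:
    """Append the terms for the files you want to the base query."""
    return " ".join([BASE_TERMS, *extra_terms])

def search_url(query: str) -> str:
    """Turn a query into a Google search URL (URL-encode quotes and spaces)."""
    return "https://www.google.com/search?q=" + quote_plus(query)

query = build_dork("mp3", "led", "zep")
print(query)
print(search_url(query))
```

Paste the printed query into Google, or open the URL directly; either way it is the exact same search as typing the terms by hand.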
The flyingbanana site linked above (at the time of writing) has a Led Zep album, an SRV album, two Gn'R albums, Mark Knopfler and more ...
This technique (if I can call it that) is not all that practical or useful nowadays. A lot of "MP3 sites" and other scammers create spurious pages just to attract such searches. Once you gain some "experience", you'll develop a nose for these false positives. But still, it's all rather easy, isn't it?
If you want to take a lesson from this, then remember: if you do have to upload stuff to your webserver (for whatever reason), make sure you password-protect the directory (.htaccess for Apache). And do not rely on the robots.txt file to protect these files; crawlers are free to ignore it, and it actually advertises the very paths you're trying to hide to anyone who bothers to read it.
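For the Apache case, a minimal sketch of such a .htaccess looks like this (the realm name and paths are placeholders, adapt them to your setup):

```apacheconf
# .htaccess in the directory you want to protect
AuthType Basic
AuthName "Private"
# Password file created beforehand with: htpasswd -c /path/to/.htpasswd someuser
AuthUserFile /path/to/.htpasswd
Require valid-user

# Alternatively (or additionally), turn off the automatic listing entirely:
Options -Indexes
```

Note that `Options -Indexes` only stops the listing from being generated; anyone who already knows (or guesses) a file's full URL can still fetch it, which is why the password protection is the real fix.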
Also, the same method can be used for more nefarious purposes, e.g. searching for tell-tale error messages that indicate security holes in a web server or the software running on it. This is something of a well-known practice, usually called "Google hacking".