Look at this picture, and try to guess how many of these cartridges are genuine:
Hint: the correct answer is “not a damn one”.
Officially the point of this project was for the school to have something with which to replace ClusterKnoppix for their Besturingssystemen II class, but really I just wanted to have something nicer to make use of my ever-growing pile of old computers, which is why I finished it. The README explains. (HPC there stands for “high-performance computing” rather than “Hasty Pudding cipher”.)
What I need now is people to test it, ideally by building images and then trying them.
If you’d rather not put that kind of effort in, I’ve also pre-built an image (291 MB). It doesn’t come with X to save space, but it does come with (what else?) this Open MPI tripcode finder I wrote a while ago. It’s not particularly fast, but it reports its progress if you poke it with SIGUSR1 (as in pkill -USR1 tripfind).
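The poke-it-for-progress trick is a common pattern; here’s a minimal Python sketch of it (POSIX only, and all the names and numbers are mine, not from the actual tripfind source):

```python
import os
import signal

tested = 0  # candidates checked so far


def report(signum, frame):
    # Runs whenever we receive SIGUSR1; the main loop is not interrupted.
    print(f"checked {tested} candidates so far")


signal.signal(signal.SIGUSR1, report)

# Stand-in for the main hash-and-compare loop.
for tested in range(1, 1_000_001):
    pass

# From another shell you'd do `pkill -USR1 -f tripfind`; here we just
# signal ourselves to show the handler firing.
os.kill(os.getpid(), signal.SIGUSR1)
```

The only real constraint is that the handler touches nothing but the counter, since signal handlers can run between any two bytecodes.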
The unprivileged user is called gjs, and the password for both him and root is t. I’ve also included the MPI-patched JtR tarball in the home directory for you to build, if that’s less pointless.
Feedback appreciated, even if you don’t find any problems. If you find this project useful at all, or if you have any suggestions, I’d like to hear about it.
If you were a USB/PXE-bootable Linux distro for HPC, what kind of features would you want to have?
My social and academic environments aren’t exactly intellectually stimulating, so I get most of the programming problems I fill my days with—and of which the ones that are the most fun to talk about end up here—from books I read. Since I’ve already read every interesting sciencey non-fiction book available in Leuven, I’ve mostly been reading fiction lately, which doesn’t exactly inspire interesting algorithms, which is why I haven’t been bloggering as much.
In an effort not to let my programming skills get too rusty, I decided to write a thing that validates and parses ISBNs, extracting the publisher information and other things that are supposed to be in ISBNs. This turned out to be annoyingly non-trivial, so instead I’m just going to write about the numbers themselves.
As you probably know, ISBNs are a book numbering scheme standardised by ISO in 1970 (as ISO 2108), based on an earlier 9-digit scheme (SBN) used in the UK. It had ten digits until recently (January 2007), when it was expanded to 13. I assumed the expansion was because they were running out of numbers (which they were), but I also noticed every 13-digit ISBN started with 978, which was odd.
Old ten-digit ISBNs consist of a group identifier, which mostly identifies the language the work is in and is of variable length (it’s a prefix code,0 to avoid ambiguity; the 9-digit SBNs ISBN is based on didn’t have a group identifier, but prepending a 0 to them (one of the codes for English-language works) turns them into valid ISBNs), followed by a publisher code (again of variable length), followed by an item identifier, followed by a single check digit, used to make sure the other numbers were entered properly.1
New thirteen-digit ISBNs are basically the same thing with 978 prepended, and the check digit is calculated differently.
So hey, this doesn’t expand the number space. What’s the deal?
The deal turns out to be EAN, or European Article Numbers.
EANs are similar to North-American UPCs, with which they are compatible. It’s a barcoding technology intended to help track items in stores. UPC numbers are twelve digits long, and EANs thirteen.2
EANs start with a two- or three-digit GS1 prefix, which is basically a country code. Somewhere along the way someone realised that books are things that are sold too, and books have ISBNs, and let’s not waste a lot of disk space storing two numbers when one will do, so the GS1 prefix 978 was created, for Bookland, the magical land where all books are printed.
Because someone had the foresight to realise ISBN would run out of numbers eventually, they also reserved 979, and since the last digit of an EAN is also a checksum digit, people didn’t want to maintain two different methods of computing checksums, and the 13-digit ISBN was created. All of the old ISBNs map to new ones seamlessly, and new ones will mostly continue to be allocated in area 978 until that’s full, which is why 978 numbers are still by far the most common ISBN-13s.3
The term Bookland is now considered deprecated because people are boring twats and GS1 prefixes stopped being country codes and started being organisation codes, and 978 and 979 are registered to the International ISBN Agency, but it’s a cute bit of trivia.
Anyway, because I don’t want this post to be entirely worthless, here’s a tiny script that takes a 9-digit SBN or 10-digit ISBN as input and produces the new 13-digit equivalent.
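Since the script itself isn’t reproduced here, this is a sketch of what the conversion involves, assuming the standard check-digit rules (ISBN-10: weighted sum mod 11, with X standing for 10; ISBN-13: alternating weights 1 and 3, mod 10); the function names are mine:

```python
def isbn10_check_ok(isbn10: str) -> bool:
    # Weights run 10 down to 1; the total must be divisible by 11.
    total = sum((10 - i) * (10 if c == "X" else int(c))
                for i, c in enumerate(isbn10))
    return total % 11 == 0


def to_isbn13(number: str) -> str:
    number = number.replace("-", "").upper()
    if len(number) == 9:               # bare SBN: prepend group code 0
        number = "0" + number
    if len(number) != 10 or not isbn10_check_ok(number):
        raise ValueError("not a valid SBN/ISBN-10")
    core = "978" + number[:9]          # Bookland prefix, old check digit dropped
    total = sum((3 if i % 2 else 1) * int(c) for i, c in enumerate(core))
    return core + str((10 - total % 10) % 10)


print(to_isbn13("0-306-40615-2"))  # 9780306406157
```

Note that prepending the 0 to an SBN doesn’t disturb its check digit, since that position carries weight 10 and 10 × 0 = 0, which is why old SBNs convert so painlessly.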
(Incidentally, that image is the ISBN for Karl Popper’s Logik der Forschung. It should not be taken as an endorsement of that tedious asshole’s work, but rather as laziness on my part, because it’s the first picture in the Wikipedia article on ISBN.)
1 Wikipedia claims it’s a modulo 11 affair, with X substituting for 10, but I don’t think I’ve ever actually seen X as a check digit. I’ll admit I haven’t been paying a lot of attention, though.
2 EAN-13, at least, which is the most common. There are others, but I’ve never seen them used. Apparently EAN-8 is common on cigarettes.
3 Something analogous happened with periodicals and their ISSN, with Unique Country Code 977, but that story is a bit more complicated because ISSNs are only eight digits long.
One thing that continues to annoy me whenever my internets get into a discussion about race is that invariably, very nearly everyone gets it wrong. The most recent example of this is, of course, when some Stormfront morons declared war on Pharyngula.
On the one hand you have the common racists, which are wrong for obvious and uninteresting reasons, but on the other you have the “enlightened” people who claim race is entirely a social construct, or at least of no significance whatsoever. They’re wrong too.
The following is an excerpt from Richard Dawkins’ The Ancestor’s Tale, which I finished a few weeks ago. It may be the clearest explanation I’ve seen so far.
It is genuinely true that, if you measure the total variation in the human species and then partition it into a between-race component and a within-race component, the between-race component is a very small fraction of the total. Most of the variation among humans can be found within races as well as between them. Only a small admixture of extra variation distinguishes races from each other. That is all correct. What is not correct is the inference that race is therefore a meaningless concept. This point has been clearly made by the distinguished Cambridge geneticist A. W. F. Edwards in the recent paper called ‘Human genetic diversity: Lewontin’s fallacy’. R. C. Lewontin is an equally distinguished Cambridge (Mass.) geneticist, known for the strength of his political convictions and his weakness for dragging them into science at every possible opportunity. Lewontin’s view of race has become near-universal orthodoxy in scientific circles. He wrote, in a famous paper of 1972:
It is clear that our perception of relatively large differences between human races and subgroups, as compared to the variation within these groups, is indeed a biased perception and that, based on randomly chosen genetic differences, human races and populations are remarkably similar to each other, with the largest part by far of human variation being accounted for by the differences between individuals.
This is, of course, exactly the point I accepted above, not surprisingly since what I wrote was largely based on Lewontin. But see how Lewontin goes on:
Human racial classification is of no social value and is positively destructive of social and human relations. Since such racial classification is now seen to be of virtually no genetic or taxonomic significance either, no justification can be offered for its continuance.
We can happily agree that human racial classification is of no social value and is positively destructive of social and human relations. That is one reason why I object to ticking boxes in forms and why I object to positive discrimination in job selection. But that doesn’t mean that race is of ‘virtually no genetic or taxonomic significance’. This is Edwards’s point, and he reasons as follows. However small the racial partition of the total variation may be, if such racial characteristics as there are are highly correlated with other racial characteristics, they are by definition informative, and therefore of taxonomic significance.
It’s not surprising that Lewontin’s1 views are most popular in the US, where casual racism is so common many smart people are so eager to dissociate themselves from it they swing too far in the other direction.
Dawkins then goes on to say that if we have a person and we are told about his sex, we immediately know more about the shape of his genitals, though not with absolute certainty. That is to say, our uncertainty about some of his attributes is reduced. Similarly, if we are told this person is black, our uncertainty about a number of his attributes, such as (but not exclusively) the color of his skin, is reduced as well, so it’s intuitively obvious that race cannot be exclusively a social construct.
The whole thing is worth reading, though the book as a whole is not his best. If you’re going to buy it, buy the hardcover version. It’s expensive, but the book relies on pictures too much for the paperback to be very useful.
Incidentally, contrary to what aforementioned Stormfront morons claim, there is no conclusive causative link between race and IQ. It’s true that blacks on average have a lower IQ than whites in the US, but that difference disappears once you adjust for class (the lower classes tend to have lower IQs than the upper classes, of course, given the strong correlation between IQ and education levels), and the fact that blacks on average tend to be lower class than whites seems to be more of a result of discrimination based on racism than it is of anything inherent in blacks.2
Either way, this whole discussion makes me tired. Talking to either side in it is like talking to a brick wall.
1 It should also not be surprising that Lewontin is an erstwhile compatriot of Gould’s, and a longtime opponent of the straw-man “genetic determinism” of evolutionary psychology.
Did you know Denmark annexed Germany, the Netherlands, Belgium, Luxembourg, Switzerland, most of Austria (Tyrol and Vorarlberg resisted the foreign invaders), Hungary, and Slovenia, thereby restoring much of the Holy Roman Empire to its former glory?
Because Google did:
As for Chrome itself, it looks promising (more so than most Google projects, at least), though obviously it needs a Linux version. And, I suppose, a FreeBSD version.
The source is available, and a horrific mess. A simple browser shouldn’t have two gigabytes of junk cluttering up its repository (516 DLLs and 218 Windows executables, for the record), or provide its own copy of Cygwin just to have (I’m assuming) a C compiler.
As soon as they release a Linux version, I’ll probably switch away from Iceweasel, though as long as there are no equivalents for TorButton and the 4chon extension, I’ll still keep it around.
(If you still don’t know what this is about, this will help.)
Edit: And even with all that junk, it still doesn’t compile. Fuck it, I’m not troubleshooting that.
This is a few weeks old at this point, but people won’t shut up about it.
The Mojave Experiment is a marketing stunt by Microsoft that involves taking people off the street and presenting them with an OS called “Mojave” and asking for their opinion on it. Afterwards, it is revealed that this OS is actually Windows Vista.
Anyone with half a brain can tell you why this is a bullshit way of doing things, but unfortunately Microsoft fanboys (which do exist, for some reason) don’t tend to be in possession of even that much.
The obvious problem is that installing Vista on powerful hardware and putting people in front of it for five minutes isn’t the same thing as making people pay for that hardware (and for the OS), and then having them use it for a few weeks. What’s being evaluated here is the at-a-glance flashy wank, not the actual OS.
Yes, Vista is rather pretty, given the expensive hardware required to run it. We already knew that.
The test subjects weren’t, however, given a chance to see how it interacts with hardware that isn’t carefully selected to be compatible with it, or to use software not explicitly designed for Vista. They didn’t even get to touch the computers themselves; everything was demonstrated by a salesman.
And of course, Microsoft states that these people had never seen Vista in action before. How many of you haven’t seen a computer running Vista by now? None of my machines run any version of Windows and everyone I interact with IRL runs Linux, and even I have seen Vista machines.
Nobody who knows enough about computers to be in any position to judge an operating system will have avoided seeing Vista well over a year and a half after its release.
Anyway, all of this is obvious and can, indeed, also be learned from the Wikipedia article. Even if the “experiment” didn’t suffer from these flaws, though, here’s why it would still be bullshit:
The operating system market is very much a lemons market. That is to say, it’s not immediately obvious to the purchaser (the user) whether or not a product is any good, and it may not become clear until quite some time after the time of purchase if it isn’t.
So the consumer has to rely on market signals to tell good products from bad ones: quality labels (none to be had, since Microsoft considers itself a standard unto itself), money-back guarantees (nope; at least, not for users who already accepted the EULA, which is a requirement for using it), third-party reviews, &c.
Third-party reviews are pretty much the most important market signal here, be it from professionals or even just from friends who’ve tried it. Of course, they’re almost universally negative.
This marketing campaign neatly gets around that by removing the critical, informed voices and instead bestowing authority on clueless laypersons who’ve been force-fed their opinions by slick marketroids.
What Microsoft is doing with the Mojave Experiment is admitting that Vista cannot compete without subverting market signals and suckering inexperienced users into buying their crap.
You’d wonder why they even bother, since their stranglehold on the OEMs means that it’s next to impossible to get a PC without Vista preinstalled anyway1, and anyone stupid enough to fall for this sort of thing isn’t going to be installing Vista on his XP machine himself.
Though if he does, of course, he’ll soon become a market signal himself.
1 Which, incidentally, is the only reason Vista adoption rates are as high as they are, but only among household consumers. If you look at businesses using Windows, where the people making these decisions are generally slightly more informed, XP still dominates.
Hear the sleepwalkers with the bellwethers —
What a worship of mesh their memo foretells!
How they tinkle, tinkle, tinkle,
In the icy airhead of nightingale!
While the starknesses that oversprinkle
All the hedges, seem to twinkle
With a crystalline delivery;
Keeping timetable, timetable, timetable,
In a soundness of Runic rice,
To the tiredness that so musically wells
From the bellwethers, bellwethers, bellwethers, bellwethers,
bellwethers, bellwethers, bellwethers —
From the jitterbug and the tipper of the bellwethers.
They’re consistently described as being three apples tall (“hauts comme trois pommes”; apparently including their hats). If these apples are regular apples, that means they’re ten inches to a foot tall. Given the size of their heads, they’d probably weigh well over ten pounds each.
You could say that they’re supposed to be those small sour apples, which would make them closer to six inches tall and put their weight at under two pounds, but that doesn’t appear to be what Peyo had in mind.
Consider this size reference chart used by the animators of the Hanna-Barbera series (courtesy of these people):
They come up to Gargamel’s hideously deformed knees!
They’re usually a bit smaller in the comic itself, but not much:
People don’t generally realise how big they are, and when told their first instinct is to assume they’ve always been drawn small and the description is wrong, but the comics are pretty consistent, and while the animated series has serious issues with proportion in general, they usually get it mostly right as well.
Note, incidentally, the gargantuan mushrooms in which they live.
Given the fact that there are ninety-nine Smurfs (including Smurfette, though not including the kids or Grandpa; actually, the issue of counting the Smurfs is a tricky one, though it’s always around a hundred), none of whom ever appear to be sharing houses, that means there are at least ninety-nine of those mushrooms (and probably rather more), which seems to give the village a surface area of about 5,000 square feet (in the comics; presumably more in the cartoon). Small for a human town, but huge for a forest clearing trying to stay out of sight.
Add the dam and Miner Smurf’s mines to that and the Smurf civilisation becomes really hard to miss.
If I were Gargamel, I wouldn’t just storm into the Smurf village and expect to get away without physical harm to myself or my cat, my point is. An individual Smurf might be handleable, but even a handful of them could do serious damage.
Though really, the Smurfs themselves don’t even seem to realise this. I’m sure there’s been at least one comic where they’ve fought back, but they almost always just panic and run.
The entire Smurf narrative would make a whole lot more sense if the Smurfs were actually tiny.
These are the things that keep me up at night.
Edit: Okay, having found and reviewed my actual comics, I’d like to retract my earlier statement that they’re consistently three apples tall there too. In fact, it’s more like one to one and a half.
What actually happened, as far as I can tell, is that “haut comme trois pommes” is a French expression just meaning “not very haut at all”, and it doesn’t imply any actual comparison to apples. Hanna-Barbera’s translators didn’t realise this, though, so they produced their hideously oversized Smurfs, forever scarring a generation of impressionable children prone to overthinking cartoons.
My respect for Peyo is restored.
In case you’re one of the three remaining people who don’t know what Tor is, it’s basically an anonymising proxy on steroids.
Any request you make over a network (say, to retrieve a web page to display in your browser) is sent to a random node in the network, which then passes it on to the next node, which passes it on to the next node, and so on, until it finally reaches its destination. Each node only knows about the previous and the next node in the chain, so it becomes impossible to trace who made the original request.
Everything’s encrypted except for the final step between the last node and the webserver (for example), so some care should be taken when entering passwords and things, as a malicious exit node can intercept those if you don’t use things like TLS or other end-to-end encryption.
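Tor’s real cryptography is layered AES inside TLS circuits, but the peeling idea itself fits in a few lines. This toy sketch (XOR standing in for actual encryption, all names invented) shows how each relay can strip exactly one layer and learn only the next hop:

```python
import base64
import json

# Each relay's symmetric key; in Tor these are negotiated per circuit.
KEYS = {"A": b"alpha", "B": b"bravo", "C": b"charlie"}


def xor(data: bytes, key: bytes) -> bytes:
    # Stand-in for real encryption; XOR is its own inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def wrap(payload: bytes, route):
    # Client side: innermost layer is for the exit, outermost for the entry.
    blob, next_hop = payload, "exit"
    for node in reversed(route):
        layer = {"next": next_hop, "data": base64.b64encode(blob).decode()}
        blob = xor(json.dumps(layer).encode(), KEYS[node])
        next_hop = node
    return blob


def deliver(blob, entry):
    # Each relay peels one layer and sees only where to forward next.
    node = entry
    while True:
        layer = json.loads(xor(blob, KEYS[node]).decode())
        data = base64.b64decode(layer["data"])
        if layer["next"] == "exit":
            return data  # the exit node forwards the plaintext request
        node, blob = layer["next"], data


print(deliver(wrap(b"GET / HTTP/1.1", ["A", "B", "C"]), "A"))
```

Note that the entry node sees who you are but not what you asked for, and the exit node sees the request but not who made it, which is the whole point.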
This is, of course, just as much of a risk on the internet in general (and one too many people aren’t aware of).
It’s pretty slow, since far more people are running clients than nodes (I’ll be setting up a node myself as soon as my ISP stops sucking; I’m giving it another week), but it’s not meant for general browsing (and certainly not filesharing) anyway; there’s a plug-in for Firefox that lets you turn it on briefly when you need it, and disable it when you don’t.
As with all privacy-preserving tools, genuinely undesirable activity is an issue (see picture), but the potential for good is considerable. While it may seem paranoid in (much of) the West (though maybe not even there), much of China, for instance, depends on tools like these.
And you never know, you may need it yourself one day, and it’s better to become acquainted with it now than when it’s too late.
Get it here, if you don’t have it already. You don’t have to run a node (you can just set up the client (complete instructions for configuring Firefox to use it are there)), but if you can, please do. People depend on it.
Nearly all cultures have historically used numeral systems in base-10 (that is, the decimal system) or some multiple thereof (Mayans used base-20, Babylonians base-60), supposedly because a human hand has ten fingers.1 If that’s the case, the ancients suffered from a severe lack of imagination.
If you count on your fingers in base-1 (that is, the normal way), you can count to ten. However, there’s a way to get all the way up to 1023 using just your two hands.
How? Use binary, of course.
It’s actually really easy once you get used to it. If your finger is up, that bit is set. If it’s down, it’s not.
For example, the following are the numbers 0, 24, 17, and 31 (only one hand is shown, because it’s easier; 31 is the highest you can go on one hand, obviously).2
Counting on your fingers in binary is a skill well worth picking up, especially if you intend to use computers more often than never, but also just because.
You can even count with negative numbers, if you use two’s complement or similar.
It might be harder to expand this to also use your toes, but every toe you add doubles your range of numbers (you can count up to 2^n − 1, where n is the number of digits; including 0, that means you can represent 2^n numbers), so you probably wouldn’t need all of them. If you’re counting up to 1,048,575 (or 2,097,151 if you’re a guy, hurr), you’re better off grabbing a calculator anyway.
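The mapping is mechanical enough to sketch in a few lines; here the thumb is the least significant bit (an arbitrary choice of mine — any consistent assignment works):

```python
def fingers(n: int, digits: int = 10) -> str:
    """Return finger positions for n: '1' = finger up (bit set), thumb first."""
    if not 0 <= n < 2 ** digits:
        raise ValueError("out of range for that many fingers")
    # format() gives most-significant-bit first, so reverse for thumb-first.
    return format(n, f"0{digits}b")[::-1]


# The one-hand examples from the post:
print(fingers(0, 5))    # '00000' — a fist
print(fingers(24, 5))   # '00011' — ring finger and pinky up
print(fingers(31, 5))   # '11111' — all five up
print(fingers(1023))    # all ten fingers up
```

Counting up is then just repeated binary increment: flip the thumb, and carry leftward through every raised finger.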
1 The Native American Yuki tribe actually used a base-4 system, because they counted the spaces between the fingers of one hand, which is interesting. Some Nigerian tribes use a duodecimal system (that is, base-12), because they are mutants.
(Actually, base-12 exists in a lot of places, mostly in the Imperial system of measurement (twelve rods to a hogshead, and all), and in various forms in time-keeping (twelve zodiac signs, twelve hours on the clock).)
2 These hand pictures are actually repurposed from a chart detailing some variety of sign language.
When stories like this break, which they do every few months, weeks, or days, depending on which corner of the internets you live in, it’s important to wonder not just why this particular product was crap (I’m guessing a severe case of NIH), but also why there are so many crap security products on the market in the first place.
The answer isn’t just that it’s hard to develop good security products; it is (and it’s complicated by Schneier’s Law), but that doesn’t explain how many of these crap products are actually quite popular.
At least part of the answer is in the concept of a lemon market.
George Akerlof famously discussed this in his 1970 paper The Market for Lemons: Quality Uncertainty and the Market Mechanism, and Bruce Schneier himself has been mentioning it in his talks for some time now, but since few people can be bothered to read an entire paper on economics or listen to hour-long talks, I thought I’d sum it up.
The example Akerlof used was of the used car market. Suppose that there are crappy used cars (“lemons”) worth $2,000, high-quality used cars worth $6,000, and everything in between, and that the buyer cannot reliably tell the difference between them before buying them.
Naturally, crappy cars will be worth less than high-quality cars, but the buyer, not being able to distinguish between them (price is not a reliable indicator, since car salesmen aren’t known for their honesty), will generally only be willing to pay what an average car is worth (in our simplified example, $4,000, say). This will be the equilibrium price for used cars in this market.
However, there’s a problem. The used car salesmen can accurately assess the value of the cars they sell, and they know very well that the high-quality cars are worth more than $4,000, so they won’t sell them at that price. However, the buyer, not having a way to distinguish overpriced crap cars from correctly priced good cars, won’t buy them at the higher price.
The result is that the high-quality cars don’t sell, and are driven out of the market by lower-quality cars.
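The death spiral is easy to sketch, using the post’s numbers: each round, buyers offer the average value of what’s on the market, and sellers withdraw anything worth more than the offer, which drags the average down further.

```python
def equilibrium(values, rounds=20):
    """Iterate Akerlof's dynamic until the market stops shrinking."""
    market = sorted(values)
    for _ in range(rounds):
        if not market:
            break
        offer = sum(market) / len(market)              # buyers pay the average
        remaining = [v for v in market if v <= offer]  # sellers pull the rest
        if remaining == market:
            break
        market = remaining
    return market


cars = [2000, 3000, 4000, 5000, 6000]
print(equilibrium(cars))  # [2000] — only the lemons survive
```

With these numbers the first offer is $4,000, which chases out the $5,000 and $6,000 cars, the next is $3,000, and so on until nothing but the $2,000 lemons are left.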
The basic criterion that makes a lemon market possible is information asymmetry. That is, sellers are aware which of their products are crap, but buyers cannot accurately determine a product’s value before buying it.
I’m sure you can see how this applies to many other markets, not just security. Operating systems comes to mind. So does the MP3 player market.
This is one of the points where the free market breaks down. For the free market to work, it is required that consumers are informed. In practice, they very rarely are.
So how do you solve this?
One of the ways to do it is through government regulation. Laws against false advertising exist in many countries, and you can regulate the quality of many products directly.
While this is certainly part of the answer, there are other ways.
Another way, which may not work for all markets, is through warranties and guarantees offered by the seller. A car salesman can offer to let the customer use the car for a while, and if he doesn’t like it, he can bring it back and get his money back.
This is trickier to do in the security business, since most people aren’t in any position to evaluate the quality of the product even after getting to use it for quite a while (really, you generally don’t notice when your firewall protects you; you only notice when it fails to, and that might not happen for months, or even years), and things like penetration tests are expensive. It does work for some products, though.
These warranties can also be enforced through government regulation.
What probably works best in the security market is public quality assurances.
While individual buyers can’t really assess the quality of their products even after buying them, security researchers certainly can. The buyer could then rely on reviews by these researchers to assess the quality (or lack thereof) of a product. Quality labels are already used in many industries, and are basically a quicker form of the same thing.
Of course, this isn’t a perfect system. Unscrupulous companies could buy good reviews from unscrupulous researchers or computer magazines (which is something that happened a lot in the firewall market of the ’90s, which is one of Schneier’s favorite examples), seriously confusing market signals. Then it’s up to the publications to establish themselves as reliable, probably in much the same way as the security products.
There is no silver bullet.
Educating users would at least weed out the obviously retarded products, and would increase security across the board even with mediocre products, but most users just aren’t very interested (which would be fine by me, if it was only themselves they’re harming; however, as botnets prove, it very obviously isn’t), and snake oil products will always be around either way.
It seems the only thing to do is to pay attention to security researchers, and to sue people who make crap products into oblivion, forever.
You want to know why I dislike Apple?
In large part, obviously, it’s because they’re made of closed-source nubbery1 and both developer- and user-hostile practices. And because they’re the largest pusher of DRM in the industry right now, thanks to iTunes. And because they market mediocre products as being the Holy Grail and Excalibur rolled into one, and sell them at enormously inflated prices.
However, Microsoft does all of that (much of it to a much lesser extent than Apple, of course, but because of their monopoly position, the effects are felt much more keenly), and shockingly, I dislike Microsoft less than I dislike Apple.
Because of the fanboys.
Case in point, this.
When I first saw it, I thought it was a joke. Perhaps they finally realised how ridiculous they were being, and they decided to parody themselves (as others have done before). Seriously, “thinnovation”? “Rethinking conventions”? “Mobile computing has a new standard”?
Unfortunately, they seem to be serious.
Yes, it’s thinner than other laptops (because what we really needed was flimsier laptops). It also doesn’t have an optical drive, it’s about two-thirds as fast as the average laptop on the market nowadays, it has a 13.3-inch screen2, and it costs well over three times what you’d pay if you got the equivalent specs in a PC laptop.
The only slightly interesting thing about it seems to be the SSD option, only it adds $1000 to the price.
Even if they didn’t intend this as a joke, it should be obvious that it is one all the same.
Except that the fanboys are eating it up. Just like they did for the iPhone. Just like they did for the iPod.3
All of these are overpriced, mediocre products with better, cheaper alternatives, but they’re popular because they have the Mac logo stamped into them.
It’s sad when even people who are famous for their intelligence and rationality fall victim to this fanboyism.
Though the fanboys are often fond of whining about people complaining about the price (which is a nice tactic, since there’s the implicit accusation that if you don’t like Apple, you must be too poor to afford their stuff), this isn’t even primarily about that. I wouldn’t use one of these if it were given to me for free (I’d probably sell it on eBay and buy six real computers with the money). It’s just a shitty crippled laptop.
And don’t even get me started on the software.
Yes, everything “just works”. Everything is supposed to “just work”. Even Windows can generally manage to make everything “just work”, and it’s expected to run on much more disparate hardware. Fuck, even Linux “just works” pretty much all of the time (at least the distros aimed at the general public). You don’t get bonus points for having everything “just work”, especially not when you determine entirely what hardware it “just works” with.
It’s the least we expect.
Now, it’s not all bad. Mac OS is an alright OS for people who are afraid of computers, and people who are a bit slow, and very small children. If you’re the type of person who needs a Fisher-Price computer, Macs are an alright choice (though certainly not the best; maybe they were ten years ago, but not anymore).
Apple used to be a decent company. In the late ’70s, they were great. In the ’80s and early-to-mid ’90s, they certainly didn’t suck hugely. Somewhere between then and now, though, they’ve become a profit-obsessed corporation that makes Microsoft look friendly.
The problem is just that the older userbase apparently hasn’t noticed (I’m not sure why; perhaps because Apple is an identity4 as well as a brand to many people, so they tend to turn a blind eye to its failings), and with the iPod, a lot of mouth-breathing 14-year-olds were brought in.
Not that there aren’t any Windows fanboys. Last time I checked there were at least four of them (most of them VB “developers”), and they’re at least as obnoxious as the Apple kids. The difference is just that they’re generally ridiculed, and nobody really pays any attention to them.
Linux and the others5 have fanboy issues as well, of course (and Ubuntu and the like are making that worse), but at least they generally won’t bend over while handing over their wallets and looking smug for doing so for the glory of Tux. (We’re just smug because we’re actually better than you.)
Anyway. I forget if I had a point, so my point will be this: I won’t hate you for using a Mac6, but for fuck’s sake, stop pretending your Fisher-Price computer is the best thing ever just because you’re afraid of leaving your comfort zone.
1 But OSS advocates still generally tend to see Apple users as allies in the War against Microsoft. It boggles the mind.
2 My mom’s $400 Dell has a 15.4-inch screen, and it’s not that much heavier than the MacBook Air.
3 The iPod in particular bothers me, because it actually forced better products out of the market through the magic of DRM-based vendor lock-in iTunes. There are still very good MP3 players out there, but the market is certainly poorer for Apple having entered it.
5 I haven’t seen any Plan 9 fanboys yet, though. Does Plan 9 even have users?
6 Though I will think less of you for it, especially if you’re old enough to know better and able to make your own decisions.
Well, for a given value of useful. My Public Key is an application that displays your PGP public key in your Facebook profile, and lets you view which of your friends have public keys listed.
It’s a very simple application, but it’s quite useful for people who don’t want to deal with keyservers and the like.
PGP is, of course, a program for encrypting and decrypting things using asymmetric cryptography. It does more than that, but that’s the short of it. There are implementations available for every major OS.
(Actually, PGP is the original, non-free program. OpenPGP is the standard, which came later, and there are implementations of that available, of course. The most popular one is probably GnuPG, which is installed by default on many Linux systems.)
Using it is quite straightforward, once you’ve done it once.
The first thing you do is generate your own keypair. Using GnuPG (on the Lunix; may be different for other OSes), you type:
gpg --gen-key
And just follow the instructions. If you aren’t sure about a question, just accept the default. It’s entirely possible your random number generator will run out of entropy while generating your key, especially for large keys. If this happens, just leave the window open and play a game for a bit.
Don’t forget to pick a solid passphrase, too. And if you pick a phrase from a famous book, at least substitute some of the words. I’m assuming it’ll let you use a single-word password as well, but why would you?
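If you’d rather script this (or just skip the interactive questions), newer GnuPG (2.1+) can also generate a key non-interactively. A minimal sketch, using a throwaway keyring so your real one is left alone (“Test User” is a made-up identity, and the empty passphrase is for testing only):

```shell
# Use a temporary keyring so we don't touch ~/.gnupg
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Non-interactive key generation (GnuPG 2.1+); empty passphrase for testing only
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Test User <test@example.com>" default default never

# Show the result
gpg --list-keys
```

For a real key you’d obviously want the interactive route with a proper passphrase; this is only for playing around.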
When that’s done, your keypair will automatically be added to your keychain. To see your public key, just type:
gpg --export -a
-a is short for the --armor option, which outputs ASCII instead of binary (which is particularly useful, since binary output can fuck up your command prompt; if that happens, just type reset (though you’ll be typing blindly) to fix it).
The output from this command is what you paste into the My Public Key app.
To import a friend’s key, just save his key to a file and do the following:
gpg --import < FILENAME
Replacing "FILENAME" with the filename, of course.
You can also just use echo and paste the key directly into the prompt, of course, but it's kind of long. The important bit is that the key is read from standard input.
If this is successful, you'll get a message saying whose key you just imported.
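To see the whole import mechanism in action without bothering a friend, you can export your own public key to a file and import it into a second, empty keyring. A sketch, assuming GnuPG 2.1+ and throwaway keyrings (“Test User” is a made-up identity):

```shell
# First keyring: generate a key to play with (empty passphrase, testing only)
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Test User <test@example.com>" default default never

# Export the public key to a file...
gpg --export -a "Test User" > key.asc

# ...then import it into a second, empty keyring, as your friend would
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
gpg --import < key.asc
```

The import step at the end is exactly the command from above; the rest is just scaffolding to give it something to import.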
To encrypt a message, you would do the following:
echo "Message" | gpg --encrypt -a -r "Recipient"
Where "Message" is your message (you can save your message to a file and use cat if you like; again, standard input), and "Recipient" is the message's recipient. You can use just the name, or the name + e-mail, or whatever. It's pretty lenient about that.
If you leave out a recipient (that is, use gpg --encrypt -a), you'll be prompted for it.
Note the use of -a again. This isn't necessary if you're encrypting files (which you can also do), but most of the time you'll be encrypting messages to paste into e-mails and the like, so it's useful to have readable output.
xarn@xarn:~$ echo "Lol penis." | gpg --encrypt -a -r "Koen Crolla"
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1.4.6 (GNU/Linux)
-----END PGP MESSAGE-----
If you're using a friend's key which you imported, it will probably give you a warning message about being unable to verify the key belongs to the person you think it belongs to. You can generally ignore that.
This output is what you send along to your friend, who can decrypt it doing:
gpg --decrypt < FILENAME
Where FILENAME is the name of the file with the message in it. Or, again, you could use echo. The program will automatically select the correct key from your private keychain, and you'll be prompted for your passphrase to unlock it.
Obviously you'll need the private key to decrypt a message, so you can't check that a message you encrypted for a friend came out right. If you want to test things, you'll need to use your own keypair. It's easy if you just pipe the encrypted message directly into the decryption command.
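That round trip with your own keypair can be sketched like this (again assuming GnuPG 2.1+, a throwaway keyring, and a made-up “Test User” with an empty passphrase, for testing only):

```shell
# Throwaway keyring and a test key (empty passphrase, testing only)
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Test User <test@example.com>" default default never

# Encrypt to yourself, then pipe the armoured message straight into decryption;
# the original plaintext comes out the other end on stdout
echo "Lol penis." | gpg --batch --encrypt -a -r "Test User" | gpg --batch --decrypt
```

Since the test key has no passphrase, nothing prompts here; with a real key you’d be asked for your passphrase at the decryption step.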
Anyway, all of this is rather involved, of course. There are graphical front-ends which make it a bit easier, and most major e-mail clients have at least one plug-in available to deal with the messy parts of PGP on its own (Thunderbird has Enigmail, for instance), so if you want to use it a lot and dislike the command line, look into those.
Since e-mail is slightly less private than writing your message on a postcard and giving it to a random stranger to mail (as I, and several other people, have mentioned before), I do encourage you to use it, though. Even Gmail's totalitarian disregard for privacy becomes less pressing if you take control yourself.
At least until someone builds a quantum computer.
Four to go. If you have a mic and haven’t recorded a verse yet, please do so.
If anyone wants a Casual Collective invite, just ask.
The game is buggy enough that I haven’t actually been able to play it (despite having been a member for a month now (closed beta FTW)), but it might work for some people.
It’s disappointing when the creators of a game worth playing (in this case Desktop TD) turn out to be worthless or incompetent douchebags. It’s been happening far too often lately.
Though I guess it’s not that surprising that Windows programmers (and I use the term loosely) would fall into corporate lockstep so readily.