Or you can create your own barcode, and maybe mess around with it. The Parkrun runner barcode is just the runner's unique ID number, encoded as a Code 128 (code set B) barcode - as far as I know the format is a single letter followed by 6 or 7 digits. There's plenty of software that can print barcodes - for example, you can use GNU barcode to generate one:
barcode -b A9876007 -e128b -o barcode.ps
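If you'd rather roll the barcode yourself, the only non-obvious part of Code 128 is its check symbol: the start symbol's value, plus each data symbol's value weighted by its 1-based position, all modulo 103. A minimal sketch for code set B (the runner ID here is an invented example):

```python
def code128b_check_value(data):
    """Return the Code 128 check symbol value for a code set B string.

    In code set B a printable character's symbol value is its ASCII code
    minus 32; the Start B symbol itself has value 104. The check symbol is
    (start value + sum of position-weighted symbol values) mod 103.
    """
    total = 104  # Start Code B
    for position, char in enumerate(data, start=1):
        total += position * (ord(char) - 32)
    return total % 103

print(code128b_check_value("A9876007"))  # an invented runner ID
```

GNU barcode computes this for you, of course; the sketch is just to show there's no magic in the symbology.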
Just for fun, you can write your name on top of the barcode - as long as you leave enough space unmolested for the barcode reader to scan. This script uses ImageMagick to do that:
#!/bin/bash
barcode -b A9876007 -e 128b | \
convert -density 300 \
- \
-flatten \
-crop 630x360+0+2920 \
-font Ubuntu-Bold \
-fill DarkRed \
-stroke white \
-strokewidth 4 \
-pointsize 21 \
-annotate +14+3010 'Parkrun Paul' \
\
out.png
which produces a barcode like this
You can print that out (on an A4 printer), to the same scale as the official Parkrun barcodes generated by their website, with this command:
lpr -o scaling=14 out.png
One needs to wrap PyCrypto's own HMAC with a little function that takes the password and salt given to it by PBKDF2 and emits the digest of the resulting hash. This example illustrates the basic hash functions available from PyCrypto itself (several of which aren't suitable for practical use, and are included here only as examples):
#!/usr/bin/python3
from binascii import hexlify
import Crypto.Protocol.KDF as KDF
from Crypto.Hash import HMAC, SHA, SHA224, SHA256, SHA384, SHA512, MD2, MD4, MD5
import Crypto.Random
password = b"A long!! pa66word with LOTs of _entropy_?"  # bytes, as HMAC requires
for h in [SHA, SHA224, SHA256, SHA384, SHA512, MD2, MD4, MD5]:
    hashed_salted_password = KDF.PBKDF2(password,
                                        Crypto.Random.new().read(32), # salt
                                        count=6000,
                                        dkLen=32,
                                        prf=lambda password, salt: HMAC.new(password, salt, h).digest())
    print("{:20s}: {:s}".format(h.__name__,
                                hexlify(hashed_salted_password).decode('ascii')))
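As an aside, modern Python can do the same derivation with nothing but the standard library: hashlib.pbkdf2_hmac takes the hash by name, so no HMAC-wrapping shim is needed (MD2/MD4 aren't available there, so this sketch sticks to the SHA family):

```python
import hashlib
import os
from binascii import hexlify

password = b"A long!! pa66word with LOTs of _entropy_?"
salt = os.urandom(32)  # a fresh random salt, as in the PyCrypto example

for name in ("sha1", "sha224", "sha256", "sha384", "sha512"):
    # 6000 iterations and a 32-byte derived key, mirroring the parameters above
    dk = hashlib.pbkdf2_hmac(name, password, salt, 6000, dklen=32)
    print("{:8s}: {}".format(name, hexlify(dk).decode("ascii")))
```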
In a speech on TV in 1993, three years before A Game of Thrones (the series' first book) was published, Yeltsin said "You can build a throne with bayonets, but it's difficult to sit on it" (reference). The normally pretty thorough TV Tropes doesn't have an example like it that predates either (trope: Throne Made of X) - obviously both Yeltsin's and Martin's thrones owe a lot to the Sword of Damocles.
To Martin, of course, it's a fun fantasy in a magical land of boobies and dragons. To Yeltsin, thrones were no abstract plaything - for him, as for Henry IV, uneasy lies the head that wears a crown. Perhaps uneasy sits the bum underneath it too.
A lot of the driving you do in your car is ancillary to owning a car, rather than being to service the things you want. If the car can drive itself around, some of that goes away. And there's some routine stuff it can do too. Consider these possibilities:
This last part is maybe the best part. Why does your office, or your apartment building, even have parking next to it? With self-driving cars, it doesn't need to - it can contract parking from some parking lot blocks, or even miles, away. And valet parking (which is what happens when all the cars in the lot are self-driving, and are controlled by the lot's valet management system) can manage where every car is kept. With cars parked millimetres apart (no need to open the doors), bumper to bumper, it can achieve double or triple the density of a normal lot. You tell it when you'll need the car, and the system makes sure it's shuffled around in time. If you have a genuine emergency (meaning you can't wait five minutes for an emergency shuffle to occur), the lot will send a taxi.
With self-driving cars, particularly electric ones, the future may spell bad news for parking attendants, taxi drivers, and filling station cashiers.
Adapted from my reddit post of yesterday.
Pono promises high quality, high definition digital portable music. CD-DA isn't a perfect format, and munged through mp3 codecs it's a bit worse still - lossy data compression, loudness and dynamic range compression, and (usually) only two channels. In an ideal listening environment, with decent speakers and a quiet room, this might possibly matter. But in a portable environment it just doesn't.
The track record of better-than-CD-quality digital audio isn't a good one anyway. HDCD, SACD, and DVD-A never made much of a dent. In part people just don't care enough (perhaps they're ignorant of the wonders they're missing, perhaps not), and in part copyright holders are reluctant to put such a high-quality stream in their customers' hands (knowing how readily media DRM schemes have been cracked). Perhaps it's time for another crack at getting a high quality digital music standard off the ground. Perhaps iTunes and SoundCloud and Facebook have altered the music marketplace enough that there's room to market better product right to the consumer. A coalition of musicians, technologists, and business people might be able to establish a new format, one that can be adopted across a range of digital devices and can displace mp3 (which is, I'll happily admit, rather long in the tooth).
But portable media is a dumb space in which to do it. And building your own media player is dumberer. People listen to portable players in entirely sub-optimal listening environments. They're on the train, in the back of the car, they're in the gym, or they're walking or running down the road. There's traffic noise, wind and rain, other people's noises, and the hundred squeaks and groans of the city. And they're listening on earbuds or running headphones or flimsy Beats headphones. The human auditory system is great at picking out one sound source from the miasma, but it can only do so for comprehension, not for quality too. High quality audio in all these environments is wasted - it's lost in the noise.
When I started running, I had a hand-me-down Diamond Rio PMP300. It came with only 32MB of internal storage - that's barely enough for a single album encoded at a modest compression level. So I broke out Sound eXchange and reduced my music to the poorest quality I could manage, and eventually reduced it to mono too. With that done, I could fit a half-dozen or so albums on the player - enough variety that each run didn't just follow the same soundtrack. With skinny running headphones, the wind and the rain, the traffic noise, and the sound of my own puffing and panting, it just didn't matter that the technical qualities of the sound were poor. And now, a decade later, when I have a decent phone that plays high quality mp3s, it doesn't really sound any different in practice.
So even if people buy Ponos, even if media executives decide to sell them high-definition digital audio, and even if people pay (again) for all their music, in the environments where most people will use their Pono the difference will be, in practice, inaudible.
For each entry I show the time (into the UK DVD edition of the film, in minutes:seconds), the line of dialog roughly about that time, a description of the streets shown, and a Google Maps link (you might need to rotate the camera in a few of them):
#!/usr/bin/env python3
from gi.repository import Gio
bg_settings = Gio.Settings.new("org.gnome.desktop.background")
bg_settings.set_string("picture-uri", "file:///tmp/n2.jpg") # you need the full path
# you can also change how the image displays
bg_settings.set_string("picture-options", "centered") # one of: none, spanned, stretched, wallpaper, centered, scaled
You'd think that it would be just as straightforward to do this on KDE4; it was possible with pydcop on KDE3, but I can't find a way of doing it on KDE4 with dbus. I've seen some postings which suggest that they're almost at the point of adding support for this.
The magic(5) file for file(1) on my machine doesn't recognise the format of the superblocks which make up a Linux RAID, instead simply reporting them as "data". This Python2 program dissects the header of a given block device and shows some information about it and the RAID volume of which it is a constituent.
#!/usr/bin/python
"""
Linux RAID superblock format detailed at:
https://raid.wiki.kernel.org/index.php/RAID_superblock_formats
"""
import binascii, struct, sys, datetime, os

def check_raid_superblock(f, offset=0):
    f.seek(offset)      # start of superblock
    data = f.read(256)  # read superblock
    (magic,
     major_version,
     feature_map,
     pad,
     set_uuid,
     set_name,
     ctime,
     level,
     layout,
     size,
     chunksize,
     raid_disks,
     bitmap_offset      # note: signed little-endian integer "i" not "I"
     ) = struct.unpack_from("<4I16s32sQ2IQ2Ii",
                            data,
                            0)
    print "\n\n-----------------------------"
    print "at offset %d magic 0x%08x" % (offset, magic)
    if magic != 0xa92b4efc:
        print "  <unknown>"
        return
    print "  major_version: ", hex(major_version)
    print "  feature_map: ", hex(feature_map)
    print "  UUID: ", binascii.hexlify(set_uuid)
    print "  set_name: ", set_name
    ctime_secs = ctime & 0xFFFFFFFFFF  # low 40 bits are the seconds; mask off the microseconds
    print "  ctime: ", datetime.datetime.fromtimestamp(ctime_secs)
    # level is stored as a u32, but Linear and Multi-Path use small negative
    # values, so convert it back to a signed number before the lookup
    if level >= 0x80000000:
        level -= 0x100000000
    level_names = {
        -4: "Multi-Path",
        -1: "Linear",
        0: "RAID-0 (Striped)",
        1: "RAID-1 (Mirrored)",
        4: "RAID-4 (Striped with Dedicated Block-Level Parity)",
        5: "RAID-5 (Striped with Distributed Parity)",
        6: "RAID-6 (Striped with Dual Parity)",
        0xa: "RAID-10 (Mirror of stripes)"
    }
    if level in level_names:
        print "  level: ", level_names[level]
    else:
        print "  level: ", level, "(unknown)"
    layout_names = {
        0: "left asymmetric",
        1: "right asymmetric",
        2: "left symmetric (default)",
        3: "right symmetric",
        0x01020100: "raid-10 offset2"
    }
    if layout in layout_names:
        print "  layout: ", layout_names[layout]
    else:
        print "  layout: ", layout, "(unknown)"
    print "  used size: ", size/2, "kbytes"
    print "  chunksize: ", chunksize/2, "kbytes"
    print "  raid_disks: ", raid_disks
    print "  bitmap_offset: ", bitmap_offset

if __name__ == "__main__":
    if os.geteuid() != 0:
        print "warning: you might want to run this as root"
    if len(sys.argv) != 2:
        print "usage: %s path_to_device" % sys.argv[0]
        sys.exit(1)
    filehandle = open(sys.argv[1], 'rb')       # binary mode
    check_raid_superblock(filehandle, 0x0)     # at the beginning
    check_raid_superblock(filehandle, 0x1000)  # 4kbytes from the beginning
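Without a real array to hand, you can sanity-check the unpack format by packing a synthetic superblock and confirming it round-trips (every value below is made up for illustration):

```python
import struct

FMT = "<4I16s32sQ2IQ2Ii"  # the little-endian superblock layout

# Entirely synthetic field values - not taken from any real array
fields = (0xa92b4efc,                      # magic
          1, 0, 0,                         # major_version, feature_map, pad
          b"\x01" * 16,                    # set_uuid
          b"myhost:0".ljust(32, b"\x00"),  # set_name
          1349000000,                      # ctime (seconds in the low 40 bits)
          1,                               # level: RAID-1
          2,                               # layout: left symmetric
          1953519616,                      # size, in 512-byte sectors
          1024, 2,                         # chunksize, raid_disks
          -16)                             # bitmap_offset (signed)

superblock = struct.pack(FMT, *fields)
assert struct.calcsize(FMT) == 100  # well inside the 256 bytes the parser reads
assert struct.unpack_from(FMT, superblock, 0) == fields
print("round-trip ok")
```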
And here is additional magic(5) data to recognise RAID volumes. It's incomplete - I've only been able to test it with RAID volumes I've been able to create myself.
# Linux raid superblocks detailed at:
# https://raid.wiki.kernel.org/index.php/RAID_superblock_formats
0x0 lelong 0xa92b4efc RAID superblock (1.0)
>0x48 lelong 0x0 RAID-0 (Striped)
>0x48 lelong 0x1 RAID-1 (Mirrored)
>0x48 lelong 0x4 RAID-4 (Striped with Dedicated Block-Level Parity)
>0x48 lelong 0x5 RAID-5 (Striped with Distributed Parity)
>0x48 lelong 0x6 RAID-6 (Striped with Dual Parity)
>0x48 lelong 0xffffffff Linear
>0x48 lelong 0xfcffffff Multi-path
>0x48 lelong 0xa RAID-10 (Mirror of stripes)
0x1000 lelong 0xa92b4efc RAID superblock (1.1)
>0x1048 lelong 0x0 RAID-0 (Striped)
>0x1048 lelong 0x1 RAID-1 (Mirrored)
>0x1048 lelong 0x4 RAID-4 (Striped with Dedicated Block-Level Parity)
>0x1048 lelong 0x5 RAID-5 (Striped with Distributed Parity)
>0x1048 lelong 0x6 RAID-6 (Striped with Dual Parity)
>0x1048 lelong 0xffffffff Linear
>0x1048 lelong 0xfcffffff Multi-path
>0x1048 lelong 0xa RAID-10 (Mirror of stripes)
I'd long been familiar with the idea that one could do so from NAND gates (building a complete NAND logic) and similarly from NOR gates (producing a complete NOR logic). Indeed, every current digital electronic system is built from one of these two schemes. But I didn't know about the completeness of implies-logic, and I'll confess to being a bit intimidated by Principia Mathematica. So I figured I'd work through building the system myself. Here goes.
We start with the material implication operator →, which has the following truth table:
p | q | p → q |
T | T | T |
T | F | F |
F | T | T |
F | F | T |
More of interest to logicians than electronic engineers, note that x → x is always True, for either value of x. So, logically speaking, it's fair to say that material implication gives rise to True (that having True a priori isn't necessary); the same isn't true for False. Someone building an actual digital circuit with memristors isn't really going to care, because a logical True and False (presumably a high and low voltage feed) are always readily available anyway. I don't know enough about the actual implementation of a memristor → gate to know whether, if you just tied the two input lines together (and not to an input line from elsewhere in the circuit), you'd actually get a consistent True level out of it (but I'm guessing you wouldn't).
With that, we can build an OR gate (denoted as ∨)
p ∨ q is (p → q) → q
And negation, an inverter gate (denoted as ¬):
¬p is p → F
With or and negation we can build a logical NOR (denoted with Peirce's arrow ↓)
p ↓ q is ¬(p ∨ q)
so
p ↓ q is ((p → q) → q) → F
...and from that point we could follow the pattern of NOR logic and build the rest of the system. Just for completeness, we can build and (∧) and nand (↑)
p ∧ q is ¬(p → ¬q)
p ↑ q is ¬(p ∧ q)
Finally we can construct xnor (logical equality, denoted =) and exclusive or (xor, denoted ⊕)
p = q is (p ∧ q) ∨ (p ↓ q)
and
p ⊕ q is ¬(p = q)
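All of the identities above can be checked mechanically. Here's a sketch that builds every gate from a single imp() function and exhaustively compares each one against Python's own boolean operators:

```python
def imp(p, q):
    """Material implication: false only when p is true and q is false."""
    return (not p) or q

F = False  # the constant False that the inverter needs

def OR(p, q):   return imp(imp(p, q), q)    # (p -> q) -> q
def NOT(p):     return imp(p, F)            # p -> F
def NOR(p, q):  return imp(OR(p, q), F)     # ((p -> q) -> q) -> F
def AND(p, q):  return NOT(imp(p, NOT(q)))  # not (p -> not q)
def NAND(p, q): return NOT(AND(p, q))
def XNOR(p, q): return OR(AND(p, q), NOR(p, q))
def XOR(p, q):  return NOT(XNOR(p, q))

# exhaustively verify every gate over all four input combinations
for p in (False, True):
    for q in (False, True):
        assert OR(p, q)   == (p or q)
        assert NOT(p)     == (not p)
        assert NOR(p, q)  == (not (p or q))
        assert AND(p, q)  == (p and q)
        assert NAND(p, q) == (not (p and q))
        assert XNOR(p, q) == (p == q)
        assert XOR(p, q)  == (p != q)
print("all gates check out")
```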
Apparently Anonymous, that vaguely extant hacktivist group that lazy journalists like to write about when they're Googling for news rather than going out and doing their actual job, has threatened to declare "cyberwar" on the fancifully named Democratic People's Republic of Korea [link].
Cyberwar on North Korea? That seems as likely to be effective as declaring cyberwar on House Lannister.
Given Valve's fondness for being silent until they're sure they're in a good position to deliver, it's easy to go from casual imputation to wild fantasy. So, without any evidence at all, here's my wild guess about the future of each of Valve's core game titles:
Why would Valve do this, when they're already swimming in money, and when each franchise is individually healthy (if we consider Half Life's lengthy state of hybernative naptosis to be healthy)? Because Valve need to sell as many of their forthcoming Steam Box PC/console/TV hybrid thing as possible. And rather than simply boast a bunch of sequels as launch content, weaving the whole thing into a major event can only enhance the press coverage and gamer hunger for the thing.
If they do this, Valve's track record suggests they'll tease it somehow, perhaps by an ARG, or by some subtle under-the-wire DLC. With everything already updated on Steam for technical reasons, it'd be easy for them to alter some noticeboards or leave the odd corpse from one game in another.
Now, that's thinking with portals.
It's at this time of year that millions of college-age Americans return to the family home and to their childhood place in the family structure. Some can decide whether to sit at the kids' table or the adults'. If they choose the adults' table, they'll hear an hour-long lecture about how Barack Obama is an evil alien who is out to destroy America; if they sit at the kids' table, they'll hear an hour-long lecture about how Megatron is an evil alien who is out to destroy America.
Does this mean that Barack Obama is really Megatron?
As with last year, I did the Great Scottish Run (a half marathon around Glasgow) on Sunday. This time I did it in 2:12:15, three minutes slower than last year.
This year the winning time was 1:03:14, with the mean finishing time 2:02:54 and the most popular finishing time 1:57:44. The graph showing the distribution of runners (new improved with labels) is very much like last year's distribution:
The Python2 code to generate the graph is below. It'll probably need to be tweaked for subsequent years, as they're not very consistent about the CSV data dump.
#!/usr/bin/env python2
# config parameters
# a dictionary giving the runner number and the colour we're going to draw their line as - their
# name is extracted from the CSV file.
runners_to_show = { '19340': {"colour": 'red'},
                  }
RESOLUTION = 15  # how many seconds correspond with each horizontal pixel
VSCALE = 3       # vertical multiplier
# ##############################################################
import csv, sys, Image, ImageDraw

# overall stats
mintime = 10000
maxtime = 0
count = 0
totaltime = 0
# gender totals
boycount = 0
boytotal = 0
girlcount = 0
girltotal = 0
# the census has one bucket for each "slice"
census = [0]*(RESOLUTION*1000)
most_popular_time = 0
most_popular_count = 0

def parsetime(t):
    "convert an h:m:s time into a number of seconds"
    h, m, s = t.split(':')
    return int(h)*3600 + int(m)*60 + int(s)

def unparsetime(t):
    "convert a number of seconds into a hh:mm:ss string"
    t = int(t)
    hours = t/(60*60)
    t -= (hours*60*60)
    mins = t/60
    t -= (mins*60)
    return '%d:%02d:%02d' % (hours, mins, t)

try:
    infile = open('2012GSRHalfMarathon.csv', 'r')
except IOError:
    print "error opening input file"
    sys.exit(1)

reader = csv.reader(infile)
for row in reader:
    place, number, time, forename, surname, gender, age, club, split1, split2, split3, filler = row
    # do some stats
    sectime = parsetime(time)  # time in seconds
    if sectime < mintime: mintime = sectime
    if sectime > maxtime: maxtime = sectime
    count += 1
    totaltime += sectime
    if gender == 'M':
        boycount += 1
        boytotal += sectime
    else:
        girlcount += 1
        girltotal += sectime
    # is this row the entry for a runner we're particularly interested in
    for k in runners_to_show:
        if k == number:
            runners_to_show[k]['name'] = forename+" "+surname
            runners_to_show[k]['time'] = time
            runners_to_show[k]['sectime'] = sectime
            print 'found runner', k, runners_to_show[k]
    # keep a census for each possible finishing time
    index = sectime/RESOLUTION
    census[index] += 1
    if census[index] > most_popular_count:
        most_popular_count = census[index]
        most_popular_time = sectime

# show the results of our pass through the data
print 'mintime', mintime, unparsetime(mintime)
print 'maxtime', maxtime, unparsetime(maxtime)
meantime = totaltime/float(count)
print 'meantime', meantime, unparsetime(meantime)
print 'most popular finishing time: %s (%d people)' % (unparsetime(most_popular_time),
                                                       most_popular_count)

# render an image, with a histogram for the census
minbucket = mintime/RESOLUTION  # the leftmost bucket
image_width = (maxtime-mintime)/RESOLUTION
image_height = (most_popular_count*VSCALE) + 50  # 50 is padding at the top
im = Image.new('RGB', (image_width, image_height), '#ccf')
draw = ImageDraw.Draw(im)

# draw the overall histogram
for x in xrange(image_width):
    draw.line([(x, image_height),
               (x, image_height - (VSCALE*census[x+minbucket]))],
              "black")
texttop = 5

# draw mean line
x = (meantime-mintime)/RESOLUTION
draw.line([(x, image_height),
           (x, 11)],
          '#FF7F00')
draw.text((x+3, texttop),
          "mean " + unparsetime(meantime),
          '#FF7F00')
texttop += 12

# draw each of the specified runners' times
for racenum, data in runners_to_show.iteritems():
    x = (data['sectime']-mintime)/RESOLUTION
    draw.line([(x, image_height),
               (x, texttop+6)],
              data['colour'])
    draw.text((x+3, texttop), "%s [%s] %s" % (data['name'],
                                              racenum,
                                              data['time']),
              data['colour'])
    texttop += 12

del draw
im.save('graph2.png')
edit: I later discovered that the runner who was being treated, Aubrey Smith, died. There's a weird fraternity between runners, and when one of us falls we all hurt.
Apple's page about the replacement program calls it a "safety risk" without providing much detail; Wikipedia's article mentions a few overheating events (but really not a lot). Still, Apple are clearly worried about their liabilities. So they took back the old one (a black 2GB model) and they've sent a new 8GB silver one.
It's slightly strange to get an Apple product like this. There's none of the usual Apple "unboxing" experience, because there's no box at all. The Nano came in a generic shipping box (one downright Brobdingnagian when compared with the tiny size of the player). No manual, no cable, no software, just the tiny Nano with a serial number sticker.
I'm surprisingly sentimental about technology. I still have every mobile phone I've ever owned (though I've given one to a relative) and, until this, every mp3 player too, right back to a Diamond Rio PMP300. If it hadn't been for the fire risk, which really prevents me from lending the old thing to someone else, I'd probably have kept it.
Still, the new Nano is an impressive little thing. It weighs very little (it's almost light enough to hang from its own headphone cable), the built-in clip is a nice idea, and the touchscreen works nicely. The built-in pedometer doesn't compare very well with a decent Oregon Scientific one, however - it seems to be rather arbitrary and unresponsive.
The most elegant solution to the calendar I've seen is JRR Tolkien's (yes, him) Shire Calendar:
Adapted from my Slashdot post here
I've never competitively run further than the 12km Bay to Breakers, but I'd prepared pretty thoroughly. On the day the various suspect joints that I feared might let me down all performed perfectly, but I messed up a bit on the hydration plan, and an uncharacteristically hot Glasgow left me rather seriously dehydrated by the end (but that's no excuse, particularly for the 5533 people who finished faster than I). I did it in 2 hours 9 minutes, which was about what I estimated beforehand, but I'd have done a good deal better had I drunk properly beforehand.
I'm especially grateful to the kind folks from the Glasgow Sikh community, who set up their own unofficial water station (amid an inexplicable five-mile gap between official water stations) and who had some Kärcher pressure washers to douse the overheated runners.
I've been dehydrated running before, but never this badly, so I was in a pretty feeble condition afterwards. It's unpleasant.
A GPS trace of the route on Google Maps is here (it's weirdly punctuated in the latter section, I guess due to the water spraying).
The race organisers have placed a CSV of all the half-marathon finishers (with splits) in a machine-parsable file here, so I've had some fun grinding through them (it's not like I'll be walking anywhere today).
Of the 8482 runners who finished (they don't give statistics for non-finishers) 5187 (61%) were men and 3295 women. The male winner was Kenyan runner Joseph Birech in an eye-watering 01:01:26, and the female winner was fellow Kenyan Flomena Chepchirchir in 01:09:26 (beating, I believe, her PB).
The mean time for the whole field was 02:03:32, with a peak of people finishing around 01:57 (what, did you guys hold hands?). For men only the mean time was 01:57:20; for women only it was 02:13:17.
My own time put me 6 minutes slower than the overall mean, and 12 minutes slower than the male mean. A graph showing the distribution of runners' finishing times is below - the overall mean is the green line, I'm the puffy and out-of-breath red one.
In the last 6k I overtook 176 people but 203 people overtook me (so that's a net 27 people who'd taken better care to drink properly). Of these, runner #13180, Angus Denham, went from being 5 minutes behind me to finishing nearly 3 minutes ahead. Perhaps Angus is really Batman.
Of the whole race, the most impressive kick (the fastest last section in relation to their first 15k) was by runner #19069, Gary Clelland (splits 00:34:05, 01:20:14, 02:04:03) who seems to have had a horrible middle race, but recovered to do the last 6k in 31 minutes.
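Split arithmetic like the kick comparison is straightforward once the h:mm:ss strings are turned into seconds. A sketch, using runner #19069's splits from above (the CSV doesn't label the split distances, so which section corresponds to "the last 6k" is an assumption):

```python
def parse_time(t):
    """Convert an h:mm:ss string into a number of seconds."""
    h, m, s = (int(part) for part in t.split(":"))
    return h * 3600 + m * 60 + s

def section_seconds(earlier, later):
    """Elapsed time between two cumulative split times."""
    return parse_time(later) - parse_time(earlier)

# runner #19069's cumulative splits, as reported above
splits = ("00:34:05", "01:20:14", "02:04:03")
print("middle section:", section_seconds(splits[0], splits[1]), "seconds")
print("final section: ", section_seconds(splits[1], splits[2]), "seconds")
```

Rank every runner by the ratio of their final section pace to their earlier pace and the biggest kicks fall out of the sort.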
BEEP!
There are two alarms in my house. They're mains wired with a 9V battery backup, and when that battery starts to fail (it seems when it gets below about 7.7V) the alarm beeps once, very briefly. The beep is frustratingly irregular, with at least several minutes between beeps. There's no light or other visual indicator on the alarm which is failing. And (as anyone who has heard the siren of an emergency vehicle echoing around the buildings of a densely built-up city knows) it's difficult to spatially locate a high-frequency tone that's echoing off hard surfaces.
BEEP!
The alarms are mounted up high, on the ceiling, they're difficult to remove (push a little tab with a knife while turning the whole thing anticlockwise), and the screwthread is a bit overpainted. There's residual capacitance in an alarm, so even if you remove its battery and disconnect it from the ceiling supply, it will still muster the energy to beep a few more times. I think I know which alarm it is, so I take it down, open it, remove its battery, find a spare 9V, replace that, restore the detector to the ceiling, and go back to bed. Mission accomplished. All is peace and quiet. Sleep.
BEEP!
Darn - I must have changed the wrong one. Find another 9V battery. Remove the second alarm (which is much more intransigent than the first), fix it, replace it. Now they're both done. Sleep.
BEEP!
It's 3:30 and this is impossible. Could I have put a bad replacement into one of them - but which? I take them both down, and find the Fluke DVM. How are "normal" people supposed to solve this kind of problem in the early hours of the morning ("normal" people who don't have a stash of 9V batteries and a digital multimeter)? I test both "good" batteries - they're showing slightly more than 9V. The two "bad" ones aren't much different - one is 8.3V and the other 8.7V. I don't have any more batteries to try, it's late and I need to work tomorrow, and there's obviously nowhere open that will sell me two 9V batteries at 4am. I can't think. I disconnect both batteries and leave both alarms on the desk. I pile clothes on the alarms to deaden the sound (knowing they'll beep for a while until their caps have discharged), close all the intervening doors, and go to sleep.
BEEP! BEEP! BEEP! BEEP! all ... night ... long ...
In the morning, when I feel only a little bit more awake, I still can't figure out the problem. There's still a periodic beep, and it can't be either of the alarms. As I stumble around, another BEEP! And then I find it - a third alarm, this time a carbon monoxide detector, hiding in the cupboard beside the heating system. Its battery is 7.2V, and given the 8.7V one it's happy.
It's stuff like this that makes well-meaning people throw their smoke alarms away, or at least permanently neglect to put batteries in them. I understand why it has to beep, and why it can't flash a permanent light to say it's in trouble (they're trying to maximise the time of protection I get, even at the expense of my sleep). It's a very simple and cost-optimised device, so it's difficult to think of a sensible way for it to communicate. All it can do is beep.
But do all three detectors have to beep the same? The beep function is a simple piezo electric sounder driven by a simple frequency generator (I guess an XTAL) and gated by a transistor. So:
So yes, a VPN is just what you need. To set one up will need the help of whoever manages your company network. They may choose to configure VPN functionality on existing hardware, or to install additional hardware or software. They'll have to worry about opening a special connection in the firewall so VPN clients can phone in to establish their sessions, and they'll have to think about how to manage who gets to login and how to authenticate they are who they say they are.
dd is one of a unix administrator's best friends - it's great for making block-level backups, securely erasing confidential data, moving a filesystem, or generating test data. But it's pretty common to set it off on a large job and come back later with no clue about how much progress it's made. John Newbigin's dd for Windows has a non-standard --progress option, but with the standard one it's not so straightforward. GNU dd has a nice feature whereby it prints its progress if you send it a SIGUSR1 (e.g. kill -USR1 1234). But if the dd process is wrapped in other processes, or isn't attached to a console, then you can't easily get this information.
Say we're running dd if=/dev/zero of=/dev/sdc
How do we figure out how far it's gone?
To the rescue comes the admin's other best friend, Vic Abell's invaluable lsof:
lsof -c dd | grep sdc
reports
dd 2662 root 1w BLK 8,32 0x187e768000 5482 /dev/sdc
The key part of that is 0x187e768000, which is the offset it's reached, in hexadecimal.
To simplify checking that, the following quick and dirty Python script (which takes the filename or device to search for as its only argument) will report the offset field in the lsof output, handily converted to Mb and Gb.
#!/usr/bin/python
import subprocess, sys

if len(sys.argv) != 2:
    print 'usage:\n %s <output file or device>' % sys.argv[0]
    sys.exit(1)
r = subprocess.check_output('lsof -c dd | grep %s' % sys.argv[1], shell=True)
val = r.split()[6]
if val.startswith('0x'):
    base = 16
else:
    base = 10
v2 = int(val, base)
print '%d Mb (%d Gb)' % (v2/1024**2, v2/1024**3)
So if the Python script is called dd_progress.py and we run dd_progress.py sdc, the output will be:
100327 Mb (97 Gb)
Some nice additions: dd can hog the IO bandwidth if given free rein, so run it with ionice dd ... so you can still work on the same machine. Rather than continually running dd_progress.py by hand, one can leave it running periodically in another window like this:
watch dd_progress.py sdc
If history was like computer games, the Second World War would have ended when Franklin Roosevelt and Winston Churchill sneaked into Hitler's castle (through a ventilation duct) and repeatedly shot a small glowing area of Hitler's neck with their rocket launchers and bakelite laser guns.
If computer games were like history, a game of Starcraft would consist of years of committee meetings about increasing bauxite production and improving the machine tool lubricating oil supply chain, and you'd get an email once a week giving you a statistical breakdown of the friendly and enemy units destroyed in the conflict.
In the short run they're looking at making a phone that's a few mm thinner than the current ones. But in the longer term they're thinking beyond what we currently call a "phone". They're looking at very small form factor devices which keep their data in the cloud, are configured by another (arbitrary) device which talks to the same cloud, and which make either sporadic or continual data connections with whatever available networks they find, to keep up to date. Imagine very small devices (wristwatches, eyeglasses, earplugs) with 802.11/UMTS/WiMAX radios (which use a mini-sim to identify themselves to whichever network they encounter). And they're thinking about these things as universal identifiers and payment tokens.
Right now you go running with an iPod. Instead you'll have an iPlug, a pair of little in-ear headphones, but with no cable and nothing strapped to your arm. You set up your music program on a tablet, and it seamlessly syncs. You run further than you'd expected, so the iPlug connects to the network and downloads more music. Miles from home your knee gives out. You touch the iPlug and say "taxi". A taxi comes (sent by Apple to the location the iPlug knew; Apple gets a dollar from the taxi fare, which you pay using the iPlug).
You have an iSIM unit in your iWatch. You're thirsty, so you touch the watch and say "coffee shop". The watch face shows an arrow to a nearby one, and the distance, and walks you there. Apple gets a dollar. You buy a drink with the iSIM as a payment token (Apple gets 30 cents) and sit down at a table. The table's surface is an active display; it talks to your iWatch and opens a connection to your account in the iCloud. Your personal news appears, your emails, your documents. You do some work, browse some stuff, and when you're done you stand up and the table blinks off. Things will be as you left them when you next pair with an active display - at home, in the car, on the train, at the office, on the beach.
All of this stuff has been done, in various disconnected ways, already. You can pay for stuff with your phone, in some places. Most Europeans (well, Brits at least) have smart cards in their credit cards. You could hotdesk 10 years ago with a SunRay (kinda). You can unlock doors with a Dallas iButton token. Having super-cheap super-light totally ubiquitous networking makes the whole thing join up into a compelling, powerful, system.
You'll never be alone again.
Adapted from my Slashdot posting here. Several posters astutely point out that tiny devices have tiny batteries and so short lives. For a device that's mostly connected to local (low-power) connections, that isn't configured to receive calls (which means it doesn't turn the radio on every few seconds to check for messages or calls) and which you habitually dock to recharge every day, this isn't an unreasonable ask.
To use it you need to plug your guitar into your Linux PC. It's possible to do this with a direct connection but the signal will be weak and noisy. It's much better to use a proper boosted instrument connection - there are many of these, but this posting talks about setting things up with the inexpensive Behringer UGC102 USB device. Various online merchants sell this for around £30.
Once you've got that, the following guide lets you get Rakarrack working in Linux. I've tested these instructions in Ubuntu 11.04 (and it mostly works the same in earlier Ubuntu releases); hopefully things should be much the same in a similar modern Linux like Fedora.
First, install the package rakarrack with apt-get (which will entail you enabling the "universe" repository, if you haven't done so already). By installing rakarrack you also bring in its dependencies, including the JACK audio system and its associated utilities.
Next, save the following script into a file (say rak.sh) and make sure that script is executable.
#!/bin/bash
# The following assumes that the Behringer UGC102 is ALSA device #1
# if it isn't, run aplay -l to see what it is
jackd -R -dalsa -D -Chw:1,0 -n3 &
sleep 1
rakarrack &
sleep 1
# mono Behringer inputs to rakarrack stereo input
jack_connect system:capture_1 rakarrack:in_1
jack_connect system:capture_1 rakarrack:in_2
# rakarrack stereo output to both main outputs
jack_connect rakarrack:out_1 system:playback_1
jack_connect rakarrack:out_2 system:playback_2
# wait for rakarrack to be closed
wait %2
# kill JACKd to tidy up
killall jackd
Open a Terminal and run aplay -l
which will list the audio devices that ALSA can see. A typical report will look something like this:
**** List of PLAYBACK Hardware Devices ****
card 0: Intel [HDA Intel], device 0: ALC883 Analog [ALC883 Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: Intel [HDA Intel], device 1: ALC883 Digital [ALC883 Digital]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: default [USB Audio CODEC ], device 0: USB Audio [USB Audio]
Subdevices: 1/1
Subdevice #0: subdevice #0
In this case "card 1" is the Behringer device (it may vary on your system). If yours is different, change the 1 in hw:1,0 in the script to the appropriate number. If PulseAudio has grabbed the device, prepend pasuspender -- to the start of the jackd line in the script.

Merchants that make repeat payments on the same credit card have to keep the customer's credit card info around. This makes breaking into them a tantalising prospect for any attacker, as the merchant stores enough information to allow fresh payments to be made at any merchant, or for fake physical cards to be manufactured. This posting discusses two proposals that allow merchants to make automated repeat payment requests on the same credit card, but without the merchant's permanent database storing that crucially re-usable credit card information.
to the start of the jackd line in the script.Merchants that make repeat payments on the same credit card have to keep the customer's credit card info around. This makes breaking into them a tantalising prospect for any attacker, as the merchant stores enough information to allow fresh payments to be made at any merchant, or for fake physical cards to be manufactured. This posting discusses two proposals that allow merchants to make automated repeat payment requests on the same credit card, but without the merchant's permanent database storing that crucially re-usable credit card information.
In the examples below, the card data (CD) is a record containing the cardholder's name and the card's number, cvv, and expiry date.
A. The public-key cryptography method
mCD = mID + name + expiry + number + cvv
B. The payment-token method
In both schemes the merchant does not retain a permanent record of the customer's credit card data (bar its expiry). So if the merchant's system is compromised, and the attacker gains access to the list of customers, including all the data the company stores on them:
The public-key method requires that the acquirer and merchant maintain a public-key infrastructure, including key distribution, that they would not otherwise need. The payment-token method does not, but requires the acquirer to store payment state for each (merchant, customer) pair.
Neither method protects against the same merchant being fooled into producing unauthorised transactions, but it firewalls the damage caused by a single merchant's system being compromised to that merchant.
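The payment-token method can be sketched in a few lines. This is a toy illustration only - the class and method names (Acquirer, issue_token, charge) are invented for this sketch, and a real implementation would of course live inside the acquirer's hardened systems, not in the merchant's code at all.

```python
import secrets

class Acquirer:
    """Toy sketch of the payment-token scheme: the acquirer keeps the
    card data and hands the merchant an opaque token bound to this
    (merchant, customer) pair.  The merchant stores only the token."""

    def __init__(self):
        self._vault = {}  # token -> (merchant_id, card_data)

    def issue_token(self, merchant_id, card_data):
        token = secrets.token_hex(16)  # opaque and unguessable
        self._vault[token] = (merchant_id, card_data)
        return token

    def charge(self, merchant_id, token, amount):
        owner, card = self._vault.get(token, (None, None))
        if owner != merchant_id:
            # a stolen token is useless to any other merchant
            raise PermissionError("token not valid for this merchant")
        return "charged {} to card ending {}".format(amount, card["number"][-4:])

acquirer = Acquirer()
card = {"name": "A Customer", "number": "4111111111111111",
        "cvv": "123", "expiry": "12/29"}
token = acquirer.issue_token("shop-42", card)
print(acquirer.charge("shop-42", token, "9.99"))
```

Note that even if an attacker steals the merchant's whole token database, the tokens only authorise payments from that one merchant to that one acquirer - which is exactly the firewalling property described above.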
One thing is clear: if you're one of the few survivors, there's nothing more useless than an electric car. At least in the short to medium term, you're going to need a good old fashioned petrol car. You can siphon fuel from other vehicles and from the tanks of petrol stations. If this is a real apocalypse (which means almost everyone is dead) rather than a mere zombie mishap, there is essentially enough fuel for you for the rest of your life.
In the longer term (maybe a generation or two, or much less if there are more survivors) then probably a diesel car adapted to burn biodiesel is the way to go, providing you can get the feedstock for that.
But plugging your Nissan Leaf into a windmill? Only if you live beside a windmill, and don't plan on being 40 miles from it ever again.
Left4Dead2's expansion "The Passing" is due for release fairly soon; this pack will, at least for PC users, be free. So they're doing their usual thing, adding value to an existing property and stretching that long tail out further.
Valve's physical distribution is handled for them by Electronic Arts, who (along with the physical retailer) will take a sizeable chunk of the proceeds, and who (if they have any sense) will have some deal with Valve preventing them from undercutting the retail channel with their own online service, at least for a while.
Valve has been very experimental about pricing and packaging, and they've done an excellent job of pushing Steam out to a pretty wide footprint, and teasing us with those darn £5 game deals. While it's always nice to make £5 on some old game, when you'd otherwise not have sold anything, it's much nicer to take all of a full-price release, and not have to share any with a distributor, a retailer, or have to spend money on little disks and manuals and other junk. So I think it's only a matter of time before Valve try their big experiment - can they make more money selling a title only on Steam, with no physical channel at all? It's clearly where the market is going, they're in a better position than anyone to deliver, and they have two prospective titles that have people champing at the bit (Portal 2 and the ever elusive Half-life 2 episode 3). Both have an established userbase that's eager for more, and all those people already have Steam. And how many people in either product's market are stuck on no-internet-ever or dialup-only machines? Valve (who see the results of Steam's occasional hardware surveys, which ask just this) know, and you can bet it's a small and diminishing group. The question they have to ask, when embarking on an online-only distribution, isn't "can we sell more online", or "can we take more turnover online", but simply "can we make more money selling online - does the less profitable physical distribution just undercut sales we'd get online anyway?" Providing both games are good (and again Valve have an excellent track record) they don't have too much to lose - sell online for a month and if that doesn't pan out, cut a retail deal anyway.
Sooner or later they're going to take a punt at this, unless EA or whomever chucks a wedge of money at them to shore up a leaky business model. So that's my prediction - HL2EP3 sold in one market (let's say the US) online only at introduction, with a physical release later or not at all. It's Valve's crunch time - do you want to work for EA, or be EA?
Sucked into the Dyson vortex - Are Dysons really worth the money?
As a vacuum a Dyson is good, that is it sucks good, and doesn't suck (unlike most other vacuums, which merely suck at sucking). It doesn't clog (often) and there isn't a bag to clog. But how much does this matter? For most people, vacuuming isn't that important. I don't vacuum that often (ideally weekly) and the whole job is 10 minutes max; any more and I'd just be wearing out the carpet. Even if your house is bigger than mine, dirtier than mine, and even if you consider yourself to be a member of some cleanliness elite, you're still not doing all that much vacuuming. Does the value-add of the Dyson really justify the six to eightfold premium?

I have a house full of books, and a storage locker somewhere with many more, some of which I've actually read. I love books, which means I should love book shops too. But I don't love Borders, and I won't mourn its passing.
My "local" Borders is in a hideous retail carpark with a collection of other dead-eyed boxes, where nothing ever happens and there's no reason to visit unless you really want something or truly have nothing to do. It's half an hour drive for me, and half an hour for everyone else.
The shelves tell a sorry story. The "new age" section is three times the size of the "science" section (and "science" includes the lightest of popular science, all of maths and engineering, and anything DIY that looked harder than putting up shelves). "Self help" is twice the size, as is "religion" - and "religion" is barely about religion at all, but mostly self-help books that cravenly invoke religion to sell themselves ("Jesus wants you to be thin"). When there's more books about the Christian way to stop smoking than there are bibles, and no sign of Aquinas or Boethius or Luther or St. Teresa, it's another sign that they're beyond redemption (so to speak).
Eight quid for a 16 page kids book? A huge collection of 2009 calendars with newspaper cartoons about slippers? Discount books that don't compete with Bargain Books who pay a tenth what Borders do for retail space? Old DVDs with prices that seem designed to compete with Virgin Megastores (look how well they did) - £9 for Casablanca? £11 for Fast Times at Ridgemont High? £12 for some thing with Jack Black in it?
Front of house, their pride and joy, Borders' strategic reserve of unsold crap. Ghostwritten celebrity junk: all called something like "My Life" or "My Way", supposedly written by someone whose life has barely started and whose only achievement is being on some program on Channel Five where they were the outstandingly stupid person among a group of troglodytes carefully chosen by professional psychologists as being the worst possible. TV tie-ins. Celebrity recipes. Celebrity detox. Celebrity diets. Celebrity confessions. Contrite celebrity "when I got out of jail all the money was gone" sob stories. Things one guy from Top Gear likes, and things the other guy from Top Gear hates. I am Jack's raging bile duct. And I swear the prices for this junk have gone up.
People will tell you that Borders, who happily crushed the smaller bookseller when times were good, has in turn been crushed between Tesco and Amazon. This is true, but Borders went along with it. It's no surprise that Tesco can shovel junk better and cheaper than anyone, or that prime retail space isn't best used for selling off old calendars and discount books. With its convenience and range and prices, Amazon is darn hard to compete with, but Borders didn't innovate and really didn't look like they were trying. Relying on the "Grandma doesn't buy books off the internet" theory was a loser for them, because Grandma turned out to be smarter than they thought.
Still, the venture capitalists who own them will have plenty of "Chicken Soup" books to read inbetween insolvency meetings.
A while ago I started rewriting in python, still on cygwin. I really don't know why. It was worth it, I suppose, but more complicated than I'd hoped. In a final push yesterday and today I finally finished it, and cut its dependency on external utilities that cygwin provides (sed and find and stuff). So now it runs natively on Windows or Linux, which is a relief. Cygwin is fine as long as you stay inside the little cygwin box - but if you have to call in and out of it then you run into endless little filename-escaping issues.
The python version is certainly more verbose than its bash counterpart, but much easier to read (and, I hope, to maintain in future). Image scaling is now done by PIL rather than imagemagick (I just couldn't get PIL to work properly on cygwin's python). Where I used hacky sed calls to do some text substitutions I now do some really botched python regexps instead (I'm more ashamed of them than I was of the sed ones, which is saying quite a lot). The whole thing has about as many lines of code (although far fewer backslashes) and takes about the same time to run. But for exactly one line (a shell variable expansion in code called inside the macro system) it's entirely portable (damn, I'd forgotten that hideous DOS %var% syntax). I still use htmlpp to do template expansion, html tidy to fix any html snafus, and linklint to verify that there aren't any bad links. Blogs are still built in an overly manual way, and there's still no RSS syndication.
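For what it's worth, the sed-to-regexp move is usually a one-liner with re.sub. A minimal sketch (the include-directive pattern here is invented for illustration, not the actual substitution the buildscript does):

```python
import re

# sed equivalent: s/<!--include:\([a-z]*\)-->/included \1/g
def expand_includes(text):
    """Replace hypothetical <!--include:name--> directives, keeping
    the captured name via a backreference, just as sed's \1 would."""
    return re.sub(r'<!--include:([a-z]+)-->', r'included \1', text)

print(expand_includes("header <!--include:nav--> footer"))
# → header included nav footer
```

The group syntax is a little cleaner than sed's escaped parentheses, though for anything beyond simple substitutions a compiled pattern and a replacement function tend to read better.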
I keep wondering whether to shift the blogging support to Wordpress (which I've goofed around with, and which couldn't be easier). I guess I'd put the Wordpress content into an iframe (where the current static blog appears now) as I don't really want the entire content served from Wordpress. I'm squeamish about dynamically generated content when static would do, so I guess I'll have to figure out how to get it to produce static content. Urgh, I guess that's another hundred lines into the buildscript.
In checking through my webserver logs I've discovered that several teenagers use my images for the backgrounds to their weblogs. Technically it would be better if they downloaded the images to their own website (instead of using up my paid-for bandwidth), but they seem like sweet people, so I'm not going to hassle them about it. If I was particularly nasty I could write some code that checks the referrer on requests for those images, and send visitors to their site (but not mine) something horrible like the goatse man.
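The referrer check itself is trivial; a minimal sketch of the selection logic (the file names are made up, and a real deployment would do this in the webserver config rather than in application code):

```python
SURPRISE = "surprise.jpg"      # what hotlinking visitors would get
NORMAL = "background.jpg"      # what my own pages get

def choose_image(referrer):
    """Pick which file to serve based on the HTTP Referer header.
    Requests with no referrer get the benefit of the doubt, since
    many browsers and proxies strip the header."""
    if referrer and "mcwalter.org" not in referrer:
        return SURPRISE
    return NORMAL

print(choose_image("http://myspace.com/someteen"))   # → surprise.jpg
print(choose_image("http://www.mcwalter.org/blog"))  # → background.jpg
print(choose_image(""))                              # → background.jpg
```

The Referer header is trivially spoofable, of course, but that's fine here: the goal is discouraging casual hotlinking, not security.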
It's a compliment, I guess, so it'd be churlish to complain. The thing about copyright is that if I fail to defend it, someone might claim that I've abrogated my right to do so in future. The last thing I'm going to do is send a nasty cease-and-desist letter to these folks, so here's the smart solution - the following myspace users are hereby granted licence to use images from this website on their personal weblogs:
(boy, you guys have some really overproduced websites)

Principles for a new version
anon(00003292)
where 00003292 is a sequence number. Special pages (contribs, block, watchlist, etc.) work just as if this was a real name, and the user receives their own unique talk and user talk pages.

Because of these limitations, it may be necessary to drop the protection afforded to anons when the anon declines cookies, and instead show their username as "anon(99.0.138.4)" - this is visible to all users (hmm, or maybe to all signed-in users) to help them track serial vandals from rolling-IP addresses.
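The naming rule above might look something like this (a sketch; the cookie plumbing and sequence allocation are elided, and the function name is invented):

```python
def anon_name(sequence=None, ip=None):
    """Render an anonymous user's display name: a stable zero-padded
    sequence number when the anon accepted our cookie, and their raw
    IP address when they declined it."""
    if sequence is not None:
        return "anon({:08d})".format(sequence)
    return "anon({})".format(ip)

print(anon_name(sequence=3292))    # → anon(00003292)
print(anon_name(ip="99.0.138.4"))  # → anon(99.0.138.4)
```

The zero-padding keeps the names a fixed width, and the two forms are visually distinct, so other editors can tell at a glance whether they're looking at a stable identity or a possibly-rolling IP.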
Notes

This isn't intended to be an extra security measure, and the cookie isn't intended to be secret. No "security by obscurity" is intended, although in practice this will help track many of the more casual AOL (etc.) vandals. Its primary purpose is to make life easier for anons; although it benefits others in making it a bit easier to track some anons, it's not accurate enough to be a "perfect tracking" solution.
As an optional implementation feature, creating an account can go through a degree of verification (recognise-the-text, email) - this will mostly be a barrier to the forthcoming generation of mass vandalbots.
The exploit in question uses a signed Java applet to install malware. The signer isn't trusted, so the browser pops up a dialog saying "the security certificate was issued by a company that is not trusted". That's a major red flag to a technically-minded user, but it's meaningless technobabble to everyone else (the majority of any browser's user population). Firefox, and other browsers in similar circumstances, is (naively) doing the right thing by showing this. But the great majority of the user population is unable to give informed consent to this nonsensical question.
Presented with these things periodically they soon realise that clicking "yes" makes things work, and "no" makes things break. Thus they become conditioned to clicking "yes" every time. They don't know what they're doing, but it's not their fault - they're being asked a question they can't possibly understand.
The fix is easy. The default setup for the browser should be to silently ignore attempts by such programs to work. No dialogs. No questions. No little warning icon. In order to be even asked the question, a user should have enough technical nous to be able to hunt through a menu and find an option to turn it on. And frankly, even as an "advanced" user, I want to have to turn on even being asked on a per-domain basis (although I'd like a little warning triangle in the corner when an unsigned control is being thwarted, much like the warning I get for blocked popups).
This pattern of asking for uninformed consent is getting pretty common. Personal firewalls are very prone to doing it, asking nontechnical users even more abstruse networking questions that they've no hope of understanding. It's not informed consent, and if the person can't reasonably be asked to give it, it shouldn't be requested in the first place.
This was followed by snowboarding in Lake Tahoe. I really can't snowboard, and I do it so infrequently that I forget almost everything in the two or three years between sessions. So this time I squished in a rib (opinions differ as to whether it's technically cracked; it sure hurts enough), twisted an elbow in an unfortunate (but not serious) direction, and probably did my liver irrevocable damage with all that endless apres-ski. Still, I had a lot of fun - if only Scottish ski resorts were as fancy as Squaw Valley and Heavenly. If only Scottish ski resorts had more (like any) snow, and more (like any) sunshine. Perhaps I should invent "rainboarding"?
Here are some photos:
After that back to the bay area for a couple of days, including a fancy meal at a faux-sixties place in San Francisco (oh, fine dining is wasted on me) and then off on the longest drive of my life. From Oakland to Barstow (boy, that's not a very interesting place), then a day and a half in Death Valley (sleeping in Beatty, NV, as the accommodation in the valley is unpleasantly expensive). Death Valley blew me away: I've never been anywhere so big, so bleak, and so empty. In the middle of this desolation the little desert flowers were blooming (this, apparently, is one of the best seasons for them in years). It's strange - the smell convinces you that you're at the seaside and it's just low tide. There are salty tide pools and the landform feels just like the seaside. It took more than a day before I could persuade myself that the tide really wasn't coming back in. It would be a great place for backpacking or biking (in the winter; I don't fancy being there in the summer at all). I also visited Scotty's Castle, which wasn't really very interesting.
More photos:
From Vegas I drove west to Flagstaff and then north over Coconino (at night, in a fair amount of snow) to Tusayan. The following morning I help an English guy at the gas station, who can't figure out how to pump gas. The first time I did this in the US (a long time ago now) I got similarly confused. In Britain you pump then pay, but in the US it's the other way around - which means if you're silly enough to pay in cash you have to guess how much, or overpay and then go back for change. Plus there's the pointless handle you have to lift to make the pump work (I've never found any need for that).
The visitor stuff at the south rim is way touristy, and one is left walking a tame little path with hundreds of bickering teenagers. I'm a bit underwhelmed, but that's probably entirely a function of only being at the tame top, and not having time to venture down into the canyon itself.
Photos:
Kayenta, like the rest of the Navajo Nation, is dry (alcohol wise); I wonder if the case of beer in the back of the truck will get me into trouble. Not to worry; I only see one Tribal Police vehicle in my entire time in the nation, haring down the road on some vital mission. Every layby off the road is strewn with bottles, so I guess that's what the kids do when the TV is bad.

Monument Valley is big, and impressive, and certainly worth the (rather hefty) trip. Like (almost) everywhere else on this trip, I wish I had a lot more time to explore properly, and to do so on foot. Even from the tame road around the easy bit, it's an incredible place.
You get that same uneasy feeling you do in any inhabited touristy place - the people there want your money, but they really don't want you. There are signs asking you not to take photos of people's homes (or, rather, signs saying you'll have to pay to do so). I don't take any; doing so would seem to require rather too much of a condescending "oh look at the quaint little houses the poor people live in" attitude. This is the poorest place I've been in the US, and it's noteworthy how many "Support Our Troops" signs there are here; I don't know what else folks would do for work.
Here's four photos of the valley. Even after poring over the USGS geodetic map I still can't figure out which mesa is which, so if someone can help me with that, I'd be grateful:
It should be possible to figure everything out from the annotated aerial photo I submitted to Wikipedia a long time ago.

I didn't go to Canyon de Chelly, largely due to not bothering to read the next few pages in my guidebook. I did briefly stray into Utah (my first time) after I missed the Monument Valley turnoff, and I'm glad (and slightly surprised) to report that I didn't instantly turn into a pillar of salt. Worse, I didn't leave time for Arizona's meteor crater (something I only noticed on the map once I'd gotten to Phoenix).
From there it's a long drive south to Flagstaff and then down into Phoenix. I didn't have much time in Phoenix (I didn't arrive until way after nightfall) so I can't say much about it. Well, other than everyone drives very fast and the highway numbering is crazy.
Now, one question I need to address is the sat-nav issue. My Dad has a TomTomGO unit, which he's very impressed with (I less so, having been sent some weird roads in southern Glasgow by it). I could have spent £100 on the US mapset for it, and I probably wouldn't have gotten lost. I did get lost, but not drastically. I can never navigate around suburban east Oakland, but the Tom Tom sometimes lets you down in such places. Flagstaff's freeway signage confused me a fair amount, and Phoenix was terrible. Vegas was easy, and every other town I was in only had one street. So I'm on the fence - I may have wasted two or three hours over the whole trip driving on the wrong road; on the upside, I didn't have to worry about whether to leave that expensive satnav unit in the car overnight.
If you've not been, I can't recommend it enough. Beautiful, clean, friendly people, nice architecture, lots of interesting stuff to do. The food was great - I don't know how I'm going to go back to eating my own cooking after living on delicious, cheap, and varied tapas. Like all "ethnic" food, the less you pay for it (and the plainer the place you buy it) the better it is. We ate in one place under the Montjuic which served us each a fish, three slices of tortilla espanola each, and two (eerily British) desserts. That and a bottle of water and a bottle of Estrella beer (the latter for me, naturally), and the total cost came to eleven euros. Heck, you couldn't eat from McDonalds for that (and they barbarically don't serve beer).
How hard can a Spanish omelette really be? (I may have to consult Delia on that one, hopefully she does more stuff than just Yorkshire pudding). Naturally I took a bunch of pictures (it's so nice to be somewhere where there's enough light for cameras to actually work properly).
The lab machine, the only machine that needs to stay turned on all the time, is a pretty decent Windows XP machine. It's usually the only Windows machine in the building, and as with other things Windows it's a disproportionate source of problems. It connects to the broadband connection via a USB 802.11n adapter. From there, via "windows connection sharing" (Microsoft's NAT implementation) it distributes the connection to the rest of the mcwalter.org lab (via good ol' ethernet).
Last week the lab machines lost their visibility of the internet. They could still see the windows box (and it them), and it could see the internet fine. Usually one would blame the windows firewall, but even with it switched off the problem remained. The cause, it turns out, is another piece of Windows' odd world view. See, at some point I'd pulled out the USB connection to the 802.11b adapter and (in tidying cables) had plugged it back in a different USB port. Now, I'd say what uniquely (and solely) identifies a network adapter is its MAC address (which is encoded into the adapter itself) - but windows cares which USB port it's plugged into. Plug the same adapter into a different port and windows thinks that it's a different connection. That's a pretty odd way of thinking, but I suppose it was something no-one had to think about before USB (ethernet and serial ports, after all, have an unbreakable link between their physical location and the means by which they connect to the host pc). Windows' NAT works by designating one connection (essentially the upstream) to be the shared connection. By plugging the adapter into a different port, the shared connection wasn't present, and so there wasn't a shared connection, and so the other machines couldn't see the internet. By redesignating this new connection as the shared one I restored the other machines' ability to talk to the internet.
So, Windows is stupid. Plugging the same adapter into a different USB port should be transparent to the ethernet layer above it, never mind the IP layers above that.
But waitadarnminnit. That wireless connection is protected by WEP (heck, it's all the cheapo adapter will support). So the properties sheet for the connection stores the network key. When I connected the adapter to a different port and made a new connection, with a new connection name, the main windows machine should have lost its wireless connection. Should, but didn't. The new connection got the same key, and indeed the same gateway address and DNS setting and netmask - because windows saw they had the same adapter, and so figured out it was the same connection. Sigh. So one part of the windows networking stack realises that my new connection is the same as the old one, but another part (a part which knows far more about USB than it should) doesn't concur.
This post is adapted from an anonymous post I made on Slashdot earlier.
To clarify, Neutrino is the (current) OS and QNX is the company (to confuse things, QNX used to make an x86-only OS called "QNX" or "QNX-OS", which is quite similar to, but not the same as, the multi-architecture neutrino).
I have some experience of both programming for Neutrino and some business-development work on projects aiming to deploy neutrino. I have both very positive and rather negative things to report.
On the upside, the Neutrino OS is generally excellent. It's very responsive (from a real time perspective) and the system and device APIs are nice and clean, pleasantly symmetric, and well thought out. Writing device drivers is a much more pleasant business than it is on Linux or Windows. The microkernel stuff really isn't visible to a user, but it makes the low-level developer's life a deal easier. There's a great satisfaction in recompiling a video driver, slaying the current instance and executing a fresh one, and having the whole thing work without a reboot.
Photon is okay. It's fast but rather old-fashioned, and its C API is crufty and rather a pain to code in. It's rather thin on higher-functionality widgets and one has to do more heavy lifting when implementing one's own widgets than I'm used to. It doesn't have a more modern graphics API (like GDI+/quartz/java2d) and that's a bit limiting when one intends to use it for TV/video stuff (settop boxes etc.); again, I can do it myself, but it's more heavy lifting than I'd expect on other OSes. Support for audio and media is so-so, and I don't believe there's any 3d support. None of this is a problem if you view Neutrino as a high-end embedded OS (as opposed to a desktop OS) but even there - I'd rather not implement a nice post-Tivo settop UI or a high-end in-car navigation system on Neutrino - it's all doable, but it's rather too much work. Photon is clearly architected for speed and real-timey-ness (it's single threaded, like Swing, but being in C one doesn't have access to some nice little things that make Swing programming more tractable, like invokeLater), rather at the expense of programmer friendliness. One has to ask, however, if it's really worth the time of the idle user learning Photon, and the low number of free and open source (and heck, commercial) programs using it shows that most developers haven't learned it. There is an X server for Neutrino, but I really don't know anything about it or the degree of toolkit support on it.
The real problem with Neutrino is (or was, maybe things will change under the new regime) QNX (the company) and their business model for selling neutrino. It's not that they're dumb or mean guys, but things conspire to make the independent developer's life (and particularly the free/open-source developer's life) discouraging. Here's some of the problems I faced:
So Neutrino is pretty good in its little embedded/control space. It has great potential to be much more, but I can't see how it'll get out of its current space. Just as Linux benefits from a virtuous circle of support, features, and acceptance, Neutrino suffers from a vicious one. Why code for it when there's so few users and the tools and docs are second-rate? Why improve the tools and docs when there are so few programmers? Why try to help expand the neutrino community when QNX aren't really terribly motivated to help you?
It's been clear for a while that QNX (the company) have been under financial pressure. I guess they're getting squeezed on one side by VxWorks and on the other by Linux. I'd hoped they'd find a way out of their niche (perhaps by open-sourcing some core stuff, perhaps some other means) but their being a remote division of a speaker company surely won't make that likely.
In conclusion: nice (core) technology, but a business model that hasn't kept pace with the times.
Update (years later): For context, "Audrey" in the post above is the old 3Com Audrey internet appliance.
They're all of flowers, taken in the garden of a friend of mine. You know, I'm really not particularly a fan of flowers in general, but if you want to take a beautiful photo quickly and cheaply, it's really hard to find a better and more convenient subject than some little flower growing in an unused patch of land.
Also, by popular demand (not least from me) I've chopped the photos page into sub-pages, sorted "thematically". So, while there's still an overall index page, things are also grouped into more easily downloadable sections.
Wiki is great for the task for which it was intended. It's great for collaborative editing of text. To date it's pretty much useless for collaborative editing of anything else (there's no reason one couldn't have a graphics wiki or a midi wiki, for example, but no-one has written the tools to do this).
Wiki isn't great for conversation, isn't great for process (which is workflow-management, I suppose) and really isn't good at all for voting.
Items need to be created, edited, checked, cleared, approved, and published. This isn't so much of an issue for an edit-forever public wiki like Wikitravel or Memory Alpha, but wikis which produce works in a finite time (such as those in professional organisations) may find the discussion of the work and the work itself dislocated. Resolution of discussions and concomitant changes in the work items must be done manually.
To be fair, wikis weren't really built with workflow management and automation in mind, and don't have support for it anywhere, not just in the discussion pages.
In the meantime, I've moved mcwalter.org and mcwalter.net from their prior home at plugsocket.com. There's nothing particularly wrong with Plugsocket (other than they're maybe rather overpriced), but they don't offer Java or Python and their config panel is the aging Sun Cobalt thing (which is fine, but horribly limited). So we're moving (or have moved, hopefully) to javaservlethosting.com. The DNS update has taken effect here, so I see the new version; it seems to work fine, and at least seems to be a bit faster than before.
In a fit of getting stuff done, I've added some background images that have been sitting around waiting to be put up for ages. These are the three:
The cherry trees are in blossom now, and the Japanese maple tree seems to have grown a whole new set of foliage virtually overnight. So for this month or so my little part of Scotland looks, to some extent at least, like Kyoto.
Of the usual myriad of sub-mapplethorpian flower photos I've only added the following three:
It's becoming clear that there are too many flower photos; in particular, the twenty or so crocus ones really are too similar to one another to justify their all being there. I'll have to thin them out somehow, but how does one choose between one's children?
More excitingly (?), I've also made a new section, presenting images suitable for you to download and use as the background on your PC. Frankly, they're not really all that good, so I doubt they'll actually compete with that one of Halle Berry's boobs that you're already using, but I can live in hope. Take a look at the backgrounds section. Your feedback, including technical feedback, is most welcome.
I've taken a bunch of my trademark (hackneyed?) low-level photos of daffodils growing on the lawn of the HQ of Central Scotland Police. I'm quite glad they didn't take ill to my standing around on their lawn taking photos, all too many of which have images of their (rather unattractive) buildings, antennas etc.
The two photos I've added to the image page are:
If it's not clear from the photos what they were doing (yes, it really isn't) the workers had long (maybe three metres) wires with some kind of fuel-filled rags tied to the far end. They walked back and forward across the field, leaving a (rather unimpressive, frankly) trail of fire behind them. If the flames appear at all scary in the shots then that's entirely a function of my being about six inches away, with the camera right down on the ground.
But spammers are professional, they're technical (or rather, those who write spamware are), and they have a strong impetus (profit) to continue in their line of work. Their goal isn't to send email, per se - they want user impressions, and they don't really care how they get them. Hitherto email and Usenet spam has been the easiest way, and so they've mostly stayed there. With email becoming a less hospitable environment, they're going to move elsewhere. They already are.
BBC news reports "spammers are targeting blogs". Equally, instant messenger and chat spam is now commonplace. This isn't a diversion - this will be the new battleground for junk postings. It'll follow the same trajectory as it did for mail, and it will be just as hard to stop.
Any website which allows users (anonymous or registered) to make changes to the website which other visitors can see will be exploited. Some examples include:
Spam filters put a Darwinian pressure on spam and spammers: either adapt into more effective spam, or move to a new area where it's easier to operate. Similarly, spam in new venues will put pressure on software for communities, chat, wikis or blogs to adapt or be swamped (as email has been) to the verge of uselessness. In the meantime, get ready for a web that sucks (in places) as much as email does now.
This photograph, which I took on a horribly hot and humid day this summer, shows the lower summit of the hill called Dumyat, which overlooks Stirling in Scotland. Down there in the bluish murky haze one can just see the Abbey Craig, which is topped by the monument to William Wallace.
That purple stuff is patches of heather. It really does grow in that weird pattern, and it really is that colour (in fact, it's far more vibrant, but my camera is rather too poor at capturing colour to properly display what it really looks like).
This is probably going to be the last image update for a while (unless I dredge something up from my rather extensive image archive). I have to confess to having broken my trusty Canon digital camera (note to self: don't leave expensive things on the roof of the car). It actually doesn't look too badly damaged (and perhaps needs just some kind of camera-orthopedics), but it'll need fixed.
For the geogeeks among you, it's on the Sheriffmuir above Stirling in Scotland, pretty close to Dunblane. The view is from Sheriffmuir road looking roughly north-west, over the valley to the Trossachs. Right beside the cellphone mast, and a large and depressingly modern farm (farmers just don't want to pander to the rustic stereotypes of us jaded urbanites). It's also surrounded by some blackface sheep, which managed to perpetually be in the way and almost never in a cute photographic pose - the only decent photo I have of any of them is an "L.A. style" driveby photo, with only half of a rather angry looking ewe in it.
I have at least a hundred photos of the same area, all taken on the same evening. It's perhaps the curse of using a camera with automatic exposure control that each picture either shows the amazing sky or shows details of the dark landscape. This falls into the latter category - the sky was in fact infinitely more grand than it appears here.
Sorry, mountain fans, I've really no idea which mountains these are - they're somewhere on the road between Aviemore and Pitlochry (but then, there's lots of mountains between those two places, so that probably doesn't help much). Again, this is a product of the same holiday as yesterday's seal pictures.
Aviemore has a rather bad reputation in Scotland (and particularly in the acid pages of certain guidebooks), as it's long been a concrete ski dormitory. It's really not that bad (having, apparently, been tidied up over the past few years). It's still trying desperately to be a "proper" ski town, but it's got a long way to go before it's Aspen or Zermatt, although it feels just a teensy bit like Tahoe City (which isn't all that impressive, but still). Frankly, after endless one-street Scottish towns and villages which feature only a drab tearoom and a "heritage centre" (note: "heritage centre" appears to be code for "shop"), Aviemore is a breath of fresh air.
Some of our time was spent at the Scottish Sea Life Sanctuary on Loch Creran near Oban. It's a great place; don't let that ugly-ass website fool you - they're much better at fish and seals and stuff than they are at website design.
The seal pups there messed around with us terribly, hanging around looking as cute as possible and then zooming off into the murk as we fumbled vainly for our cameras. I did manage to capture a couple of decent images (and over a hundred rotten ones). I've uploaded the decent ones in the 'animals' section of the photo page (warning: nauseating cuteness ahead).
Apart from the legal problems associated with the format, GIF is technically outmoded: it lacks decent colour imaging, has no alpha support, and features rather poor compression. Ideally it would have been entirely superseded by now, either by the vastly superior raster formats PNG and MNG, or (for maps, diagrams and other such "drawn" things) by the Scalable Vector Graphics format. PNG support is essentially universal in modern browsers, and both MNG and SVG are bubbling under nicely.
While the burn all gifs page flatly recommends "by switching your site to PNG, you encourage users to upgrade to PNG-capable browsers", this rather stern prescription leaves all too many innocents in its wake - there are still enough users who're stuck (for technical or institutional reasons) in GIF-only browsers. Wise web developers (and me too) believe, rather, in making pages that "degrade gracefully". This leaves web page developers with something of a dilemma: either stick with the antique format yet again, or leave the poor innocents who can't upgrade with a broken website. Whoever said HTML was portable?
It's possible (although tricky and rather non-portable) to pick up image information from a CSS stylesheet, and it's possible (although tricky, very non-portable, and frequently disabled) to use DHTML (i.e. JavaScript and some DOM or other) to alter images according to browser capabilities. Lastly, one can dodge the client issue altogether and generate the appropriate webpage on the fly in the webserver - but that's a rather expensive and troublesome solution, particularly for a problem that should be trivial to solve on the client.
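As a sketch of that server-side approach (the filenames are hypothetical, and this assumes the browser sends a useful HTTP Accept header), the server can simply pick the best image format the client advertises:

```python
# Sketch of server-side image format negotiation (hypothetical filenames).
# A real webserver would wire this into its request handling; here we just
# choose a filename based on the browser's HTTP Accept header.

def pick_image(accept_header, candidates):
    """Return the first candidate (filename, mimetype) whose mime type
    appears in the client's Accept header; fall back to the last entry,
    which should be the lowest-common-denominator format."""
    # Strip any ";q=..." quality parameters from each Accept entry.
    accepted = [part.split(';')[0].strip() for part in accept_header.split(',')]
    for filename, mimetype in candidates:
        if mimetype in accepted or 'image/*' in accepted or '*/*' in accepted:
            return filename
    return candidates[-1][0]

# Preferred formats first, GIF as the fallback of last resort.
candidates = [('foo.png', 'image/png'), ('foo.gif', 'image/gif')]

print(pick_image('image/png,image/gif', candidates))  # PNG-capable browser: foo.png
print(pick_image('image/gif', candidates))            # GIF-only browser: foo.gif
```

This is, of course, exactly the "expensive server resources" trade-off described above: every page hit costs a negotiation that the client could have done for free.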
What we really need is a fix to the markup that defines images, which presently looks something like this:
<img src="foo.gif"
width=100
height=20>
Browsers could support image markup that allows us to define a number of images (with some ordered notion of preference), letting the browser load and display the best one it is able to. A first cut at this might look something like this:
<img src="foo.gif"
width=100
height=20
newsrc="foo.png">
This second version is better, as it allows newer browsers to support enhanced functionality (a newer file format, in this case) without messing things up for folks stuck in antique browser land. We have to be careful, however, not to just defer the problem into the future - if (as is bound to happen) a third (fourth, fifth...) format becomes available in a few years, do we condemn future web designers to the same hard choice as above?
Naively, one could fix the problem incrementally (for some amazing new 2009 era file format "FUF"):
<img
src="foo.gif" width=100 height=20
newsrc="foo.png"
newnewsrc="foo.fuf">
Naturally, the above isn't really much of a solution for anything. It's altogether better to solve the problem in one go, by specifying a list of available files, and letting the browser pick the first one (scanning left-to-right in the list) that it can support:
<img src="foo.gif"
width=100
height=20
newsrc="foo.fuf|image/fuf#foo.png|image/png">
Note that I cheated a bit above, by putting the mime types (image/fuf etc.) into the (pseudo-)URI (which is naughty, to say the least), but it means the client doesn't have to parse the filename (which is itself naughty) or load the file (which is inexcusable) just to figure out it can't support that format.
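Since the syntax above is my own invention, the following is purely illustrative, but a client implementing it would just split the attribute on '#' into filename|mimetype pairs and take the first pair whose type it supports:

```python
# Sketch of how a browser might parse the (invented) newsrc list syntax:
# filename|mimetype pairs separated by '#', scanned left to right,
# most-preferred format first.

def choose_source(newsrc, supported_types):
    """Return the first filename whose declared mime type the client
    supports, or None (meaning: fall back to the plain src attribute)."""
    for entry in newsrc.split('#'):
        filename, mimetype = entry.split('|')
        if mimetype in supported_types:
            return filename
    return None

newsrc = "foo.fuf|image/fuf#foo.png|image/png"

# A browser that only knows PNG skips the unknown FUF entry:
print(choose_source(newsrc, {'image/png', 'image/gif'}))  # foo.png

# A futuristic FUF-capable browser takes the preferred format:
print(choose_source(newsrc, {'image/fuf', 'image/png'}))  # foo.fuf
```

Note that the selection is done entirely from the declared mime types - no filename sniffing, and no speculative downloads.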
This approach (or something more general, ideally with better thought-out syntax) also solves that perennial browser-author issue of "what's the point in our supporting it before IE does?". That seems to be a significant part of the "do we include MNG" argument that the Mozilla folks are going through right now (see mozilla bug 18574).
Better yet, the page degrades gracefully, so it should be much more accessible for visitors with screen readers and text-only browsers.
I really wanted to change from using a table layout to an entirely CSS layout. Written properly, a CSS-laid-out page can degrade much better than the usual table-based solution that's the current state of the internet. This would provide a simple and usable page for disabled users and those using limited-access devices (like TV set-tops and cellphones). The villain of the piece is, as always, Microsoft. IE6's handling of percentages in CSS layouts differs from the other modern browsers (Netscape/Mozilla, Konqueror/Safari, and Opera). I'm being generous here - really, IE6's CSS layout code is broken. I'd love either not to cater for IE6 visitors, or at least to give them a crappy experience, but the grim fact of the matter is that around 90% of my visitors (poor misguided innocents that they are) still use either IE6 or (jeepers) IE5.5.
So I have a plea for moderately technical visitors who're still running Microsoft Internet Explorer - web authors and sad disabled children the world over beg you to try another browser.
For Mac users, try Safari or Mozilla.
For Windows users, try Mozilla or Opera.
Better yet, all of these browsers are free to download and they're all better than Internet Explorer, faster than Internet Explorer, and more standards compliant than Internet Explorer. Users of these browsers also have a lowered rate of gonadic atrophication. Honest.
[ perhaps I should explain: "mozilla" is exactly the same as the Netscape browser, just with Netscape's advertising stuff removed ]
It was previously hosted by Yahoo! in sunny Silicon Valley. For both technical and logistical reasons I've moved it to Plugsocket Internet, so this page is now being served from the dank East Anglian fens in the low eastern part of southern England. Remember that "dead marshes" place in the second Lord of the Rings movie? East Anglia's just like that.
Perhaps it's my imagination, but things seem to run much faster from the new location. If you're looking for a decent low-cost web hosting solution, and you don't need the handholding that providers like Yahoo! provide, I can heartily recommend Plugsocket.
The idea
As with other things marked "stupid idea", I'm not really serious about this, so don't send me angry emails saying it's a wicked idea. I know.
The current range of countermeasures employed by rangers to discourage this is varied, and increasingly ineffective, not least because bears are so smart that they can soon figure out their own countermeasures to the rangers' schemes (who says cartoons aren't just like real life?). The rangers' tactic of last resort is to transport the bears to areas with fewer campers. Sometimes the bears come back.
This idea proposes to exploit the bears' natural intelligence rather than trying to work around it. The bears need to be discouraged from molesting vehicles and camping equipment - once this is achieved the mother bears will teach their cubs to avoid campers (just as they presently teach them quite the opposite). So, how do you discourage a giant, nearly invulnerable omnivore - and do so in a way that their pretty smart brains won't easily figure a way round?
The technology for this already exists, and can be adapted to anti-bear applications with a minimum of effort. Police, para-military and military forces around the world regularly use non-lethal stun grenades (also known as flashbangs), which generate an exceptionally loud explosion and simultaneously a brilliant, blinding flash of light. The intention is to render terrorists and other miscreants temporarily insensate, allowing the user of the flashbang to overpower his opponent without resorting to lethal force. The effect of a flashbang detonation on a bear's acute sensory system will likely be equally profound, and bears are very likely to avoid repeated exposure, or the chance thereof.
So, it should be possible to place a flashbang in a specially prepared picnic basket, tent, vehicle or other human-related setting. The booby-trapped device can be labeled with a warning sign, so that campers who chance upon the site don't inadvertently set it off themselves (surely the only way bears can figure out a way around this would happen when they learn to read - at which point the bastards can quit lounging around in the forest and can get proper jobs like the rest of us have to). The trap can optionally be baited with some particularly enticing foodstuffs, the smell of which will attract any bear bold enough. When the bear tries to open the booby-trapped item, the flashbang will detonate, giving the bear a terrible shock and leaving it disoriented and probably nauseous. Any bear unfortunate enough to encounter this on a couple of occasions will soon associate human food with great discomfort, and will soon return to their pastoral lives eating nuts and berries or whatever.
Why this isn't such a good idea
But it's not just me
Park rangers and other woodsman-types are trying to figure out any number of ways to discourage bears from foraging for human food. At one time the recommendation was to spray pepper spray around one's encampment. People did this for a time, until some researchers observed bears that seemed to enjoy the pepper-sprayed area. In retrospect, using the pepper spray turned out to be about as smart as walking naked through the woods smeared with honey.