I’ve finally gotten around to listing all Waypoints (Geocaches, Opencaches, Closedcaches, Earthcaches, Terracaches including Locationless, Navicaches, etc.) where I’ve found a box, had fun, learned something, found a good place to hide one myself, etc., putting up a list and, of course, generating my own statpic.
I’ll put them up for the other project members, too (already made a picture for gecko2@ but bsiegert@ still needs one; we also need to collect offline lists of found, owned and attended waypoints)…
A bit of background story: I decided, years ago, to keep an offline list of my cache finds in case something were to happen. Except I had already found way too many, so this was a huge bit of work. Oh well… I of course procrastinated, and then something did happen (Opencaching wanting to force a Restricted Commons licence; me disagreeing and suggesting a change; some trigger-happy person immediately deleting my account without waiting for the discussion or the decision period to end; weeks of forum discussions; Opencaching allowing dual-licencing; them telling me they can’t restore my data – probably never heard of databa…sorry, MySQL backups). And I still didn’t have the list. Now I do; I even recreated the OC information from what was still accessible, with help from one OC supporter (“mic@”, thanks); merged caches that are co-listed on several platforms; etc. (I still need to put in the FTF/STF/TTF/4TF/LTF and voting/favourites information) and made a statpic, all in Open Source and Open Data, in cvs(1) with mksh(1) and… a… frontend for libgd2, I admit, but we had been using that for the MirWebsite for a while already.
I suggest every geocacher keep an offline or local record of all their finds (and hides and attended logs) for things like this, in case some platform decides to… let’s say, “put your data into the cloud… where it is? I don’t know”.
Apparently (hi Zhenech, found on Plänet Debian), a Man must not only fork a child, plant a tree, etc. in their life but also write a DynDNS service. Perfect for opening a new tag in the wlog called archæology (pagetable.com – Some Assembly Required is another nice example of the genre).
Once upon a time, I used SixXS’ heartbeat protocol client for updating the Legacy IP (formerly known as “IPv4”) endpoint address of my tunnel at home (my ISP offers static v4 for some payment now, luckily). Their client sucked, so I wrote one in ksh, naturally.
And because mksh(1) is such a nice language to program in (although I only really began becoming proficient in Korn Shell around 2005–2006, so please take those scripts with a grain of salt – I’d do them much differently nowadays), I also wrote a heartbeat server implementation. In Shell.
The heartbeat server supports different backends (per client), and to date I’ve run backends providing DynDNS (automatically disabling the RR if the client goes offline), an IP (IPv6) tunnel of my own (basically the same setup SixXS has, without knowing theirs), rdate(8) based time offset monitoring for ntpd(8), and an eMail forwarding service (as one must not run an MTA on dynamic IP) with it; some of these even in parallel.
Not all of it is documented, but I’ve written up most things in CVS. There also were some issues (mostly to do with killing sleep(1)ing subprocesses not working right), so it occasionally hung, but very rarely. Running it under supervise from DJB’s dæmontools was nice, as I was already using djbdns, since I do not understand the BIND zone file format and do not consider MySQL a database (and did not even like databases at all, back then). For DynDNS, the heartbeat server’s backend simply updated the zone file (adding, updating or deleting the line for the client), ran tinydns-data, rsync’d the result to the primary and secondary djbdns servers, then ran zonenotify so the BIND secondaries get a NOTIFY to update their zones (so I never had to bother much with the SOA values, only allow AXFR). That’s a really KISS setup ☺
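To illustrate the DynDNS backend, the update boils down to something like the following minimal sketch – the host name, paths, TTL and the zonenotify arguments are made up for the example; the real scripts in CVS do more (per-client handling, disabling the RR when the client times out):
# sketch: refresh the tinydns A record for one dynamic client
host=client1.dyn.example.org ip=$1 ttl=300
cd /service/tinydns/root
grep -v "^+$host:" data >data.new
[ -n "$ip" ] && echo "+$host:$ip:$ttl" >>data.new    # leave the line out to disable the RR
mv data.new data
tinydns-data
rsync -az data.cdb dns1.example.org:/service/tinydns/root/
zonenotify dyn.example.org ns2.example.org ns3.example.org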
Anyway. This is archæology. The scripts are there, feel free to use them, hack on them, take them as examples… even submit patches back if you want. I’ll even answer questions, to some degree, in IRC. But that’s it. I urge people to go use a decent ISP, even if the bandwidth is smaller. To paraphrase a coworker after he cancelled his cable-based internet access (I think at Un*tym*dia) before the 2-week trial period was even over: better slow but reliable internet at Netc*logne than “that”. People, vote with your purse!
The MirBSD Korn Shell R45 has been released today, and R44 has been named the new stable/bugfix-only series. (That’s version 45.1, not 0.45, dear Homebrew/MacOSX packagers.)
Packagers rejoice: the -DMKSH_GCC55009 dance is no longer needed, and even the run-time check for integer division is gone. Why? Because I realised one cannot use signed integers in C, at all, and rewrote the mksh(1) arithmetics code to use unsigned integers only. Special thanks to the people from musl libc and, to a lesser extent, Natureshadow for providing me with ideas on which algorithms to replace some functionality with (signed shell arithmetic is, of course, still usable, it is just emulated using unsigned C integers now).
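A quick illustration of the user-visible side (a sketch; it assumes the usual 32-bit mksh arithmetic): overflow now simply wraps around, because unsigned C arithmetic is well defined, whereas signed overflow is not:
$ echo $((2147483647 + 1))
-2147483648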
The following entertainment…
tg@blau:~ $ echo foo >/bar\ baz
/bin/mksh: can't create /bar baz: Permission denied
1|tg@blau:~ $ doch
tg@blau:~ $ cat /bar\ baz
foo
… was provided by Tonnerre Lombard; like Swedish, German has a number of words that cannot be expressed in English, so I don’t feel up to the task of explaining this to people who don’t know the German word “doch”. Just rest assured that it re-runs the last input line (be careful, this is literally a line, so don’t use backslash-newline sequences) using sudo(8).
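For the curious, a definition along these lines does the trick – a sketch, not necessarily the exact alias Tonnerre uses; fc -ln -1 reprints the previous history line without a number:
alias doch='sudo mksh -c "$(fc -ln -1)"'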
I uploaded a full bulk build of binary packages for MirBSD/i386 corresponding to the pkgsrc-2013Q1 release. About 7,000 binary packages are available in this build, including the pkgin package manager that makes installing binary packages as easy as apt.
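Getting started looks roughly like this (a sketch – the repository URL is a placeholder for the actual MirBSD/i386 pkgsrc-2013Q1 package directory, and paths assume the default /usr/pkg prefix):
echo http://ftp.example.org/MirBSD/pkgsrc-2013Q1/i386/All >/usr/pkg/etc/pkgin/repositories.conf
pkgin update          # fetch the package summary from the repository
pkgin install vim     # resolve dependencies and install the binary package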
For a while now… [statpic: “I am a proud …”]
On the other hand… I should probably put up my own, local list of found caches, considering what happened to me on “Open”caching. And maybe write intros for people new to geocaching – it would have been virtually no work had I done that from the beginning. (And for fanfiction readers! I wish I’d kept a list of the fics I’ve read, not just of those I’m currently reading and/or that are still unfinished.)
On Saturday, March 23, this year's pkgsrc conference (pkgsrcCon 2013) took place in Berlin. Julian Fagir organized it with unending energy, even though pkgsrc is not the primary focus of his NetBSD work. He just took matters into his own hands because no one else stepped forward. A big thanks for that!
The flight from Zurich to Berlin was uneventful. It was my first flight to TXL airport (I normally arrive at SXF), and arriving there is incredibly quick and convenient compared to the latter. The terminal is very small, and it takes just five minutes to go from the plane to a bus to the city.
Now for the conference itself: we started at 12pm on Saturday with a program of talks but no fixed schedule. Due to this, the conference took a long time (we finished only at 9pm or so) but on the other hand, it allowed for lots of interesting and fruitful discussion. At no point did we have to cut a question short because of a lack of time. Overall, I think that this was an excellent choice and made the conference more useful and productive.
We were about 21 people – mostly pkgsrc developers (of course) but also a Debian Developer (Ralf Treinen, who presented his work on Mancoosi), a FreeBSD dev and some interested users. I won't give an exhaustive recollection of all the talks here but simply comment on a few that I found particularly interesting.
The most important theme of the conference was virtualization and cloud computing. Jonathan Perkin and Filip Hajny gave a talk about their company's product, SmartOS, and how it uses pkgsrc. SmartOS is a "cloud OS" based on OpenSolaris. It boots from a read-only medium (such as a CD) into a lean system that only does the administration of all the zones that it runs. All useful work happens in zones, which are a sort of lightweight VM solution specific to OpenSolaris. The zone images include access to a very complete set of pkgsrc packages for things such as a compiler. They can also run other OSes (NetBSD!) by setting up a zone that runs KVM. Joyent runs a large public cloud with SmartOS, where customers purchase virtual machines by the hour. This is similar to Amazon EC2 but with a focus on high performance.
Hubert Feyrer gave another talk on a similar theme. He described the use of Ansible for provisioning and setting up VMs. Ansible can automatically create VMs on EC2, gather the necessary information (such as the IP address), and do various setup tasks without further user interaction. This was all very impressive, even though the live demo failed. This was for two reasons: somebody had deleted the sudo package for amd64 from the NetBSD ftp server (boo), and the i386 VM failed to come up because the kernel panicked on startup. Joerg speculated that this was due to _some_ machines in their DC not having PAE enabled, while the i386 kernel uses PAE. This was interesting, as I had noticed the very same problem when I set up the netbsd-386-bsiegert continuous builder for Go.
Amitai Schlair alias Schmonz came out in a passionate defense of the venerable pkglint. He put the source on github and started refactoring the code and adding tests. He calls this approach TED for "Test Eventually Development" ;) and advocated a similar approach for the pkgsrc infrastructure: Every time a developer takes five minutes to understand a part of the infrastructure (when making a change, for instance), he or she should write a test for it. This is a very pragmatic and doable approach, in my opinion, and we should all do this.
I gave a slightly amended version of the "Go on NetBSD" talk I had given at FOSDEM 2013. There were a lot of valuable questions and discussion, both about the language and about how to package software written in it.
Aleksej Saushev ended the day with a talk, not in the program, about Google Code-in and the problems that developers and particularly new contributors face. If pkgsrc can get more contributors, it gets more fixes, which in turn makes it more useful to users. More usefulness leads to more users, leading to more contributors. We should do more to get into this virtuous circle. There are about five different mechanisms to build and/or deploy packages in pkgsrc: building directly with "make package", pkg_chk, pkg_comp, the old bulk build scripts, and pbulk. The basic frustration that should be overcome is the following: you want to upgrade a set of packages, the old ones are removed, the new ones are rebuilt, and the build fails. Rolling back is difficult in general. pbulk could be a valuable solution to this, but its standard config is heavily tailored to a different use case, and its _two_ separate pieces of documentation are contradictory, incomplete and confusing. So the talk contained a call to action to fix those minor annoyances and generally document things better, which makes things easier for everybody.
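(For reference, the first of these mechanisms is as simple as the following sketch; the package path is chosen arbitrarily:)
cd /usr/pkgsrc/shells/mksh
make package    # build, install, and create a binary package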
My take-home message – and my next project idea – is the following: each time that I do a MirBSD bulk build using pbulk, I have to do a lot of painful steps to set up the right build environment on all my machines. This time, I will try to automate this process with Ansible, making up the recipes as I go along, and then (more importantly) publish these recipes for others to use and to share.
Of course with MirKaffee (contains milk, cocoa, coffee, cane sugar)!
I’ve been doing too much lately, which has led to reduced performance and enjoyment. I’ve also not been able to work the full hours of my dayjob, reducing what I had on my overtime account. I’ll be taking a step back and trying to un-load. This is my notice; I’m not being explicit about where, and I’m not cancelling anything specific (not even the things mentioned in the next paragraphs).
I’m disappointed with Google/Nianticproject Ingress. It’s frustrating (nothing lasts; also read this posting), buggy, battery-draining, sometimes too time-consuming (especially with only GPRS), and I can’t get comfortable with the Android 2.3 based Cyanogenmod on the borrowed device. Using it without a big-screen device showing the Intel map next to you is futile. I could go into detail but won’t. I won’t stop playing, as it’s a good excuse to go outside and it combines somewhat with geocaching (unless you’re trying to actually play Ingress, in which case you’ll just be walking/cycling/driving between portals at maximum speed). And there’s that connection with Liferay…
Fun is important in securing volunteer work; bugs and other random happenings (example) can drain the fun.
To end on a positive note, I’m absolutely, totally happy with mksh user and distributor feedback, including the bug reports and feature requests, how well almost all people deal with feature rejection, and the speed of integration of mksh(1) updates lately. The only thing I’m unhappy about wrt. mksh is my own lack of speed in implementing the cool new things that I, as an mksh user, have been waiting for – I want and even need them for some cool programs I’d love to write in mksh, so I can then use them.
I’ve got roughly 350 mails in my INBOX (all read, but most of them are action items; some are due… before this weekend, evilly enough – the one I’m thinking of is GnuPG/MIME encrypted, which means extra effort just to read it). Just so you know. (And a couple of other things that really could use some fixing, which I can, in theory, do. And lots of requests for spending real-life time with people.)
I’m still reachable via eMail and IRC (mostly), will respond, and will try to persuade my employer to send me to CLT 2013 next month… just don’t deadline me right now. I’m not taking a VACation either (though I probably should, had I the money).
On Planet Debian, Vincent Bernat wrote:
This is totally unacceptable. Regenerating files like aclocal.m4 and Makefile.in (for automake), configure (for autoconf), and the like is one of the absolute duties of a software package. Things will break sooner or later if people do not do that. Additionally, generated files must be re-creatable from the distfile, so do not break this!
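In a Debian package, for instance, this can be as simple as the following sketch (dh-autoreconf achieves the equivalent under debhelper):
# in debian/rules, before ./configure is run:
autoreconf -f -i    # regenerate configure, aclocal.m4, Makefile.in, etc.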
May I suggest, constructively, an alternative? (People – rightfully, I must admit – complain I’m “just” ranting too much.)
When making a release from git, write the “git describe” output into a file. Then use that file, instead of trying to run the git executable, whenever .git/. is not a directory (“test -d .git/.”). Do not call git then, because, in packages, it’s either not installed and/or undesired.
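A minimal sketch of what that could look like (the file name .gitdescribe is merely my choice for the example):
# in the release script, before creating the tarball:
git describe >.gitdescribe
# in the build system’s version detection:
if test -d .git/.; then
	version=$(git describe)
else
	version=$(cat .gitdescribe)
fi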
I couldn’t comment on your blog, but I felt strongly enough about this that I took the effort of writing a full post of my own.
(But thanks for the book recommendation.)
git log -n 1 --all --full-history --pretty=format:'%cD'
This should™ scan all branches, take the chronologically last commit and output its committer date. Still doesn’t take into account git-receive-pack times, but we can just look at the mtime of the firstname.lastname@example.org mailing list for that.
PSA: Referring to Unicode codepoints.
If your Unicode codepoint is, numerically, between 0 and 65533, inclusive, convert it to hexadecimal and zero-pad it to four nibbles. For example, the Euro sign € is Unicode codepoint #8364, which is 20AC hex; the Eszett ß is 223, which is DF hex, padded to 00DF. Then write an uppercase ‘U’, a plus sign ‘+’, and the four nibbles: U+20AC U+00DF
In mksh, JSON, etc. it’s a backslash ‘\’, a lower-case ‘u’ and four nibbles.
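A quick way to check this in mksh (assuming a UTF-8 terminal; the print builtin understands the \u escape):
$ print '\u20AC \u00DF'
€ ß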
Otherwise, your Unicode codepoint will be, numerically, between 65536 and 1114111, inclusive, that is hex 10000 to 10FFFF. (There’s nothing on 65534 and 65535, nor above these figures.) In this case, convert it to hex, zero-pad it to eight nibbles and write it as an uppercase ‘U’, a hyphen-minus ‘-’ and the eight nibbles. In C-like escapes for environments supporting the Unicode SMP, that’s a backslash ‘\’, an upper-case ‘U’ and eight nibbles. Do not, in either case, use fewer (or more) hex digits than specified here. For example, there’s a famous Unicode codepoint U-0001F4A9 “PILE OF POO”. That’s not the same as U+1F4A9. The latter reads as U+1F4A “GREEK CAPITAL LETTER OMICRON WITH PSILI AND VARIA” and a digit 9 (Ὂ9). Be educated.
Since this wlog runs on MirBSD, which limits itself to the Unicode BMP voluntarily, and as nōn-BMP is not widespread anyway, I cannot reproduce the “PILE OF POO” here, but you can just duckduckgo it.
Let’s start a convention: bare-metal machines take the masculine linguistic gender („der Computer“, he needs to be rebooted), whereas VMs take the feminine („die virtuelle Maschine“, she runs better since the last upgrade of Linux-KVM), and the neuter is used when you cannot, or do not want or need to, make such a distinction.
This is, of course, entirely unrelated to human gender, but not unrelated to #debian-68k (on OFTC) discussions ;-)
ObRant: DO NOT USE xz COMPRESSION LEVELS ABOVE 6! (For -7 we can make exceptions, for example in Debian *-dbg or *-source packages.) You may use -e if you absolutely need the better compression, but please think of the poor sods who have to create the archives. You must not use the highest compression levels -8 or -9 since they have absolutely insane memory requirements on compression and will still hinder machines with less RAM on decompression. (Using -e only affects CPU usage at compression time; decompression is exactly as fast and memory-consuming as without.) Furthermore, DO NOT CHOOSE A COMPRESSION LEVEL WITH A DICTIONARY SIZE MUCH LARGER THAN THE DATA TO COMPRESS, as that makes absolutely no sense and will rather worsen than improve compression. As a reminder, xz uses the following dictionary sizes:
- 256 KiB at -0 (compresses better than gzip(1) and faster than either gzip(1) or bzip2)
- 1 MiB at -1
- 2 MiB at -2 (compresses better than gzip(1) and bzip2 without losing much speed)
- 4 MiB at -3 and -4 (the difference is in the match finder between these two levels)
- 8 MiB at -5 and -6
- 16 MiB at -7 (186 MiB RAM used to compress a file)
- 32 MiB at -8 (370 MiB RAM used to compress a file)
- 64 MiB at -9 (674 MiB RAM used to compress a file)
Decompression uses less than 1 MiB more than the dictionary size, but the dictionary must always be allocated wholly. (You’re fine to use custom presets, but mind the RAM usage!) As a general rule, if you have something of up to 20 MiB to compress, -4 is fine, and -5 will only be better if you have similar data spread across the whole of the file instead of close to each other. When I make mksh distfiles, I instead put files close to each other that have related content, which improves compression much more nicely without penalising low-memory systems; for example, you could put documentation, Makefiles, scripts, m4(1) files, and C source code into groups before archiving, instead of doing it alphabetically.
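As a concrete illustration (file names made up; the option syntax is plain xz): a custom preset lets you keep a sane dictionary for small inputs, and --info-memory shows the limits in effect:
xz -4 somedist.cpio                             # 4 MiB dictionary, fine for ~20 MiB of input
xz --lzma2=preset=6,dict=4MiB otherdist.cpio    # -6 settings with a smaller dictionary
xz --info-memory                                # show compression/decompression memory limits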
Another note on bzip2: its decompression is slow. I see no reason to use it any more, at all. Use gzip(1) if you care for compatibility or have an issue with xz not having a free copyright licence, and xz otherwise.