debian tag cloud


MirWarm

07.07.2013 by tg@
Tags: debian fun

zz̼̐z

Time for more neighbours’ cat posts, apparently. It’s warm, so the cat’s sleeping outside. Not disturbed by much.

Me, envious. Too warm to go to the ice-cream parlour (the bike's in repair, the car's hot enough to boil eggs on, and public transport is no better).

Waypoint Statistics

08.06.2013 by tg@

mirabilos’ found waypoints

I’ve finally gotten around to listing all waypoints (Geocaches, Opencaches, Closedcaches, Earthcaches, Terracaches including Locationless, Navicaches, etc.) where I’ve found a box, enjoyed myself, been educated, found a good place to hide one myself, etc., putting up a list and, of course, generating my own statpic.

I’ll put them up for the other project members, too (already made a picture for gecko2@ but bsiegert@ still needs one; we also need to collect offline lists of found, owned and attended waypoints)…

A bit of background story: I decided, years ago, to keep an offline list of cache finds in case something should happen. The catch: I had already found way too many, so this was a huge amount of work. Oh well… I of course procrastinated, and then something did happen (Opencaching wanting to force a Restricted Commons licence; me disagreeing and suggesting a change; some trigger-happy person immediately deleting my account without waiting for the discussion or the decision period to end; weeks of forum discussions; Opencaching allowing dual-licencing; them telling me they can’t restore my data – probably never heard of databa… sorry, MySQL backups). And I still didn’t have the list. Now I do; I even recreated the OC information from what was still accessible, with help from one OC supporter (“mic@”, thanks); merged caches that are co-listed on several platforms, etc. (I still need to put in the FTF/STF/TTF/4TF/LTF and voting/favourites information); and made a statpic – all in Open Source and Open Data, in cvs(1), with mksh(1) and… a… frontend for libgd2, I admit, but we had been using that for the MirWebsite for a while already.

I suggest every geocacher keep an offline or local record of all their finds (and hides and attended logs) for things like this, in case some platform decides to… let’s say, “put your data into the cloud… where it is? I don’t know”.

DynDNS

20.05.2013 by tg@
Tags: archaeology debian

Apparently (hi Zhenech, found on Plänet Debian), a Man does not only need to fork a child, plant a tree, etc. in their life but also write a DynDNS service. Perfect for opening a new tag in the wlog called archæology (pagetable.com – Some Assembly Required is also a nice example for these).

Once upon a time, I used SixXS’ heartbeat protocol client for updating the Legacy IP (formerly known as “IPv4”) endpoint address of my tunnel at home (my ISP offers static v4 for a fee now, luckily). Their client sucked, so I wrote one in ksh, naturally.

And because mksh(1) is such a nice language to program in (although I only really began to become proficient in Korn Shell around 2005–2006, so please take those scripts with a grain of salt; I’d do them much differently nowadays), I also wrote a heartbeat server implementation. In Shell.

The heartbeat server supports different backends (per client), and to date I’ve run backends providing DynDNS (automatically disabling the RR if the client goes offline), an IP (IPv6) tunnel of my own (basically the same setup SixXS has, without knowing theirs), rdate(8) based time offset monitoring for ntpd(8), and an eMail forwarding service (as one must not run an MTA on dynamic IP) with it; some of these even in parallel.

Not all of it is documented, but I’ve written up most things in CVS. There were also some issues (mostly to do with killing sleep(1)ing subprocesses not working right), so it occasionally hung, but very rarely. Running it under the supervision of DJB’s dæmontools was nice, as I was already using djbdns, since I do not understand the BIND zone file format and do not consider MySQL a database (and did not even like databases at all, back then). For DynDNS, the heartbeat server’s backend simply updated the zone file (by adding, updating or deleting the line for the client), then ran tinydns-data, then rsync’d the result to the primary and secondary djbdns servers, then ran zonenotify so the BIND secondaries get a NOTIFY to update their zones (so I never had to bother much with the SOA values, only allow AXFR). That’s a really KISS setup ☺

Anyway. This is archæology. The scripts are there, feel free to use them, hack on them, take them as examples… even submit back patches if you want. I’ll even answer questions, to some degree, in IRC. But that’s it. I urge people to go use a decent ISP, even if the bandwidth is smaller. To paraphrase a coworker after he cancelled his cable based internet access (I think at Un*tym*dia) before the 2-week trial period was even over: rather have slow but reliable internet at Netc*logne than “that”. People, vote with your purse!

mksh R45 released

26.04.2013 by tg@

The MirBSD Korn Shell R45 has been released today, and R44 has been named the new stable/bugfix-only series. (That’s version 45.1, not 0.45, dear Homebrew/MacOSX packagers.)

Packagers rejoice: the -DMKSH_GCC55009 dance is no longer needed, and even the run-time check for integer division is gone. Why? Because I realised one cannot safely use signed integers in C, at all, and rewrote the mksh(1) arithmetics code to use unsigned integers only. Special thanks to the people from musl libc and, to a lesser extent, Natureshadow for providing me with ideas about which algorithms to replace some functionality with (signed shell arithmetic is, of course, still usable; it is just emulated using unsigned C integers now).
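
To illustrate the technique, here is a minimal sketch of the general idea – an illustration only, not the actual mksh code, and the function name shell_div is made up: signed 32-bit division with truncation towards zero can be done entirely on unsigned values by splitting off the signs first, so no operation can ever overflow a signed type.

	#include <stdint.h>

	/* sketch: divide two 32-bit "shell integers" kept in unsigned
	 * variables, giving truncation towards zero, without ever
	 * performing a signed operation that could overflow */
	static uint32_t
	shell_div(uint32_t num, uint32_t den)
	{
		uint32_t neg = (num ^ den) & 0x80000000U;	/* sign of result */
		uint32_t n = (num & 0x80000000U) ? -num : num;	/* |num| */
		uint32_t d = (den & 0x80000000U) ? -den : den;	/* |den| */
		uint32_t q = d ? n / d : 0;	/* caller flags division by zero */

		return (neg ? -q : q);	/* unsigned negation is well-defined */
	}

Addition, subtraction and multiplication simply wrap modulo 2³², which unsigned C arithmetic gives for free.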

The following entertainment…

	tg@blau:~ $ echo foo >/bar\ baz
	/bin/mksh: can't create /bar baz: Permission denied
	1|tg@blau:~ $ doch
	tg@blau:~ $ cat /bar\ baz
	foo
 

… was provided by Tonnerre Lombard; like Swedish, German has a number of words that cannot be expressed in English, so I don’t feel up to the task of explaining this to people who don’t know the German word “doch”; just rest assured that it re-runs the last input line (be careful: this is literally a line, so don’t use backslash-newline sequences) using sudo(8).

Since a while…

I am a proud
EarthCache Master

On the other hand… I should probably put up my own, local, list of found caches, considering what happened to me on “Open”caching. And maybe write intros for people new to geocaching – it would have been virtually no work had I done it right from the start. (And for fanfiction readers! I wish I’d kept a list of the fics I’ve read, not just of those I currently read and/or that are currently unfinished.)

GNU autotools generated files

20.02.2013 by tg@
Tags: debian rant

On Planet Debian, Vincent Bernat wrote:

The drawback of this approach is that if you rebuild configure from the released tarball, you don’t have the git tree and the version will be a date. Just don’t do that.

Excuse me‽

This is totally unacceptable. Regenerating files like aclocal.m4 and Makefile.in (for automake), configure (for autoconf), and the like is one of the absolute duties of a software package. Things will break sooner or later if people do not do that. Additionally, generated files must be remakeable from the distfile, so do not break this!

May I suggest, constructively, an alternative? (People – rightfully, I must admit – complain I’m “just” ranting too much.)
When making a release from git, write the “git describe” output into a file. Then use that file instead of trying to run the git executable if .git/. is not a directory (“test -d .git/.”). Do not call git in that case, because, in packages, it’s either not installed and/or undesired.

I couldn’t comment on your blog, but felt strongly enough about this that I took the effort of writing a full post of my own.

(But thanks for the book recommendation.)

PSA: Referring to Unicode codepoints.

If your Unicode codepoint is, numerically, between 0 and 65533, inclusive, convert it to hexadecimal and zero-pad it to four nibbles. For example, the Euro sign € is Unicode codepoint #8364 which is 20AC hex; the Eszett ß is 223 which is DF hex, padded 00DF.
Then write an uppercase ‘U’, a plus sign ‘+’, and the four nibbles: U+20AC U+00DF
In mksh, JSON, etc. it’s a backslash ‘\’, a lower-case ‘u’ and four nibbles.

Otherwise, your Unicode codepoint will be, numerically, between 65536 and 1114111, inclusive, that is hex 10000 to 10FFFF. (There’s nothing at 65534 and 65535, and nothing above 1114111.) In this case, convert it to hex, zero-pad it to eight nibbles and write it as an uppercase ‘U’, a hyphen-minus ‘-’ and the eight nibbles. In C-like escapes for environments supporting the Unicode SMP, that’s a backslash ‘\’, an upper-case ‘U’ and eight nibbles. Do not, in either case, use fewer (or more) hex digits than specified here. For example, there’s a famous Unicode codepoint U-0001F4A9 “PILE OF POO”. That’s not the same as U+1F4A9. The latter reads as U+1F4A “GREEK CAPITAL LETTER OMICRON WITH PSILI AND VARIA” followed by a digit 9 (Ὂ9). Be educated.
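
For reference, a tiny sketch that emits the recommended spelling for a given codepoint (an illustration only; print_codepoint is a made-up name, not from any particular codebase):

	#include <stdio.h>

	/* print the recommended notation for a Unicode codepoint:
	 * U+XXXX (four nibbles) for the BMP, U-XXXXXXXX (eight nibbles)
	 * for the other planes; everything else is not a codepoint */
	static void
	print_codepoint(unsigned long cp)
	{
		if (cp <= 0xFFFDUL)
			printf("U+%04lX\n", cp);
		else if (cp >= 0x10000UL && cp <= 0x10FFFFUL)
			printf("U-%08lX\n", cp);
		else
			printf("(not a Unicode codepoint)\n");
	}

print_codepoint(0x20AC) gives U+20AC, and print_codepoint(0x1F4A9) gives U-0001F4A9.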

Since this wlog runs on MirBSD, which limits itself to the Unicode BMP voluntarily, and as nōn-BMP is not widespread anyway, I cannot reproduce the “PILE OF POO” here, but you can just duckduckgo it.

Let’s start a convention: bare-metal machines have the linguistic male gender („der Computer“, he needs to be rebooted), whereas VMs have the linguistic female gender („die virtuelle Maschine“, she runs better since the last upgrade of Linux-KVM), and neutral linguistic gender is used when you cannot or do not want or need to make such distinction.
This is, of course, entirely unrelated to human gender, but not unrelated to #debian-68k (on OFTC) discussions ;-)

ObRant: DO NOT USE xz COMPRESSION LEVELS ABOVE 6! (For -7 we can make exceptions, for example in Debian *-dbg or *-source packages.) You may use -e if you absolutely need the better compression, but please think of the poor sods who have to create the archives. You must not use the highest compression levels -8 or -9 since they have absolutely insane memory requirements on compression and will still hinder machines with less RAM on decompression. (Using -e only affects CPU usage at compression time; decompression is exactly as fast and memory-consuming as without.) Furthermore, DO NOT CHOOSE A COMPRESSION LEVEL WITH A DICTIONARY SIZE MUCH LARGER THAN THE DATA TO COMPRESS, as that makes absolutely no sense and will rather worsen than improve compression. As a reminder, xz uses the following dictionary sizes:

  • 256 KiB at -0 (compresses better than gzip(1) and faster than either gzip(1) or bzip2)
  • 1 MiB at -1
  • 2 MiB at -2 (compresses better than gzip(1) and bzip2 without losing much speed)
  • 4 MiB at -3 and -4 (the difference is in the match finder between these two levels)
  • 8 MiB at -5 and -6
  • 16 MiB at -7 (186 MiB RAM used to compress a file)
  • 32 MiB at -8 (370 MiB RAM used to compress a file)
  • 64 MiB at -9 (674 MiB RAM used to compress a file)

Decompression uses less than 1 MiB more than the dictionary size, but the dictionary must always be allocated wholly. (You’re fine to use custom presets, but mind the RAM usage!) As a general rule, if you have something of up to 20 MiB to compress, -4 is fine, and -5 will only be better if you have similar data spread across the whole of the file instead of close to each other. When I make mksh distfiles, I instead put files close to each other that have related content, which improves compression much more nicely without penalising low-memory systems; for example, you could put documentation, Makefiles, scripts, m4(1) files, and C source code into groups before archiving, instead of doing it alphabetically.
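
To make the rule of thumb concrete, here is a little sketch of a preset picker following the advice above (my own illustration, not part of any xz tooling; “much larger” is read here, somewhat arbitrarily, as more than twice the input size):

	#include <stdint.h>

	/* dictionary sizes of xz presets -0 .. -9, as listed above (bytes) */
	static const uint64_t xz_dict[10] = {
		1UL << 18, 1UL << 20, 1UL << 21, 1UL << 22, 1UL << 22,
		1UL << 23, 1UL << 23, 1UL << 24, 1UL << 25, 1UL << 26
	};

	/* stay at or below -6, use -4 for anything up to about 20 MiB,
	 * and step down while the dictionary dwarfs the input */
	static int
	pick_preset(uint64_t input_size)
	{
		int p = (input_size <= (20UL << 20)) ? 4 : 6;

		while (p > 0 && xz_dict[p] > 2 * input_size)
			--p;
		return (p);
	}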

Another note on bzip2: its decompression is slow. I see no reason to use it any more, at all. Use gzip(1) if you care for compatibility or have an issue with xz not having a free copyright licence, and xz otherwise.

mksh made quite some waves (machine translation of the third article) recently. Let’s state it’s not just Amigas – ara5 is a buildd running the Atari kernel, an emulated one, though. On the other hand, the bare-metal Ataris used to be the fastest buildds, so I expect we’ll get them back online soonish. I’m currently fighting with some buildd software bugfixes, but once they’re in, we will make more of them. Oh, and porterboxen! Does anyone want to host a VM with a porterbox? Requirements: wheezy host system (can be emulated), 1 GiB RAM, one CPU core with about 6500 BogoMIPS or more (so the emulated system has decent speed; an AMD Phenom II X4 3.2 GHz does just fine). Oh, and mksh is ported to more and more platforms, like 386BSD 0.0 with GCC 1.39, and QNX 4 with Watcom… and more bugfixes are also being worked on. And let’s not forget features!

jupp got refreshed: it now has a bracketed paste mode, which is even auto-enabled on xterm-xfree86 (though the xterm(1) in MirBSD is a tad too old to know it; I will update that later – I just imported sendmail(8) 8.14.6 and lynx(1) 2.8.8dev.15 into base, more to come) and which will be enhanced later (it should disable auto-indent, wordwrap, status line updates, and possibly more); lots of new functions and bindings; it now uses mkstemp(3) to create backup files race-free; and more (read the NEWS file).

In MirBSD, Benny and I just added a number of errnos, mostly for SUSv4 compliance and to be able to compile more software from pkgsrc® without needing to patch. This is being tested right now (although I should probably go out and watch fireworks in less than half an hour), together with the new imports and the bunch of small fixes we accumulate (just because most development in MirBSD currently happens in mksh(1) and the like doesn’t mean that all of it does or, worse, that we were dead, which we aren’t). I’ll publish a new snapshot some time in January. The Grml 2012.12 also contains a pretty up-to-date MirBSD, with a boot(8/i386)loader that now ignores GUID partition table entries when deciding what to use for the ‘a’ slice.

If you haven’t already done so, read Benjamin Mako Hill’s writings!

Der heilige… Frieden?

15.12.2012 by tg@
Tags: debian politics

(Apologies for putting this on Planet Debian, but it says the occasional non-English post is okay as long as it’s an exception. I feel I need to reach more people with this, but don’t feel like translating it into English right now.)
Update: Tanguy asked for a short English summary: it’s me ranting against the agitation against Muslims and against the call for more CCTV surveillance after a possible bomb was found at the train station.

In Bonn, the “bomb mood” still prevails if you look at, say, the local newspaper’s website – of the killing spree in Connecticut, which people on IRC are wondering about, there is still nothing to be seen, but there is plenty of agitation against “Islamists”.

I find this worrying: every follower of Islam must now fear being persecuted or discriminated against. That only invites retaliation, which can then also hit people who do not at all agree with the opinion and politics prevailing here.

Personally, I have no problem with people of a different faith or world view, as long as we can live together peacefully. I share your discontent with the ruling state, the ever-expanding surveillance, the oppression of people who do not match the prevailing image of what a person should be (in whatever category), and I ask those who read this to think again before they do something that later hits innocents or even degenerates into “friendly fire”.

Has anyone actually had a look at the Qur’an books handed out in Bad Godesberg? When I read about them I was admittedly curious, because I unfortunately know rather little of the Qur’an, but I don’t know how neutral (or not) the translation is. From what I have picked up so far, it should be rather more peaceful than what later theologians laid down – as in Christianity, for example, but I don’t want to go on about the horror episodes of the Christian church either, in the hope that it, too, has improved over the years. (The problem is just the people who still spread the “old hate slogans” today. It’s like the groupies of Theo de Raadt on the net, who are even nastier to people than he is himself.) (Besides, nowadays one has to fear being prejudged merely for owning a Qur’an *sigh*… I don’t think that’s good!)

Update (I forgot): the call for more video surveillance is also just scaremongering. It only comes at the expense of ordinary citizens. It might still deter petty offences like pickpocketing, but these bombs and the like are often organised precisely by people who are not afraid of consequences. At most they become martyrs. Let me repeat, for the politicians and the really slow readers: surveillance does not prevent crime.

Update 11.01.2013: By now Fefe has also written something about this.

Before we begin, everyone should read up on hashtables and what open addressing / closed hashing is. The context is lines 111‥190 of Python’s Objects/dictobject.c as of today (so we get the line numbers straight).

(I’ve reworded this wlog entry a bit; I originally wrote it too late at night for it to read coherent.) Basically, I’ve got an application where I’d like to use a hashtable for a number of things – not as generic as Python, and with focus on small footprint. I’d like to offer associative arrays in a scripting language, where the keys are always arbitrary byte strings excluding NUL. Also, I’d like to use the hashtable as backend for indexed arrays, where the keys are uint32_t and the usual use case is sequential. Finally, I’m using it for several internal tables, such as a list of keywords, one of builtins, one of special variables, etc. which is a reason for me to not use a self-balancing binary tree as data structure (reading further below might suggest that, but getting a sorted list of hashtable keys is not the focus, though not unimportant).
My questions on this are:

① Why is the shift on perturb done after its first use? In my experiments (using 32-bit width everywhere), for the pathological case of an 8-element (i = 3) table with the three entries 0, 0x40000000 and 0x80000000, the “second round” yields 1 for all three, so it cannot have to do with the upper bits. My lookup looks like:

	/* lookup sketch (pseudo-C): open addressing with perturbation */
	mask = (1U << i) - 1;		/* table has 2^i slots */
	j = perturb = hash(key);
	goto find_first_slot;

	 find_next_slot:
	j = (j << 2) + j + perturb + 1;	/* j = j * 5 + perturb + 1 */
	perturb >>= PERTURB_SHIFT;
	/* FALLTHROUGH */

	 find_first_slot:
	entry = table[j & mask];
	if (!match(entry)) goto find_next_slot;	/* no match: probe the next slot */
 

This means that my first check is always the bare hash (so “only do it if needed” is no reason) and, since I’m using gotos, I could just move the perturb >>= PERTURB_SHIFT; line before the line recalculating the next j to use. This seems to make more sense, even in the face of Python. (I actually looked at the Python file’s comments again today because I thought to use a different resolution, but they have a good rationale for using the multiplication by 5.)

② Why can’t we just use i as the PERTURB_SHIFT? Sure, this changes a right-shift by a constant, which can possibly be encoded as an immediate value in assembly (unless you’re on a pre-80186, which can only do SHR AX,1 and SHR AX,CL but not SHR AX,4, but that’s outside of mksh’s scope), into a right-shift by a variable, but i is already known, and I think the behaviour is better (it wouldn’t eat any bits; assume the same 8-entry hashtable and the pathological keys 0, 8 and 16). Again: who do I think I am to go against the wisdom of the Python people, who seem to have put more thought into this than everyone else I saw, asked, or read about (including Spammipedia). That’s why I’m asking here. On that reference: I don’t support spammers or people nagging for donations or premium accounts, like Xing and Groundspeak/Geocaching.COM, at all. In fact, I urge others to do the same, so it really hurts them; it may be their business model, but not if they spam me. Besides, OpenCaching.DE exists.
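
Spelled out, the variant argued for here – shifting perturb before its first use, and using i as the shift width – would look about like this (same pseudo-C sketch as above):

	mask = (1U << i) - 1;
	j = perturb = hash(key);
	goto find_first_slot;

	 find_next_slot:
	perturb >>= i;			/* shift before use, by i instead of PERTURB_SHIFT */
	j = (j << 2) + j + perturb + 1;	/* j = j * 5 + perturb + 1 */
	/* FALLTHROUGH */

	 find_first_slot:
	entry = table[j & mask];
	if (!match(entry)) goto find_next_slot;

With i = 3 and the pathological keys 0, 8 and 16 from above, the second probes then land in slots 1, 2 and 3 instead of colliding again.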

Another thing is: to avoid CVE-2011-4815, I’m randomising the hash used, with one “seed” value per hashtable, changed before a resize operation. I originally thought to seed it with a nonzero value, but then I’d have to rehash on hashtable resize, so I’ll be XORing the final hash value instead (thanks ciruZ for the idea). I’m thinking of omitting that for indexed arrays, as an attacker almost certainly cannot determine the keys there. (Directly using the indexed array keys, which are already uint32_t, as hashes makes using i from ② even more important.) The hash I’m using is a modified Jenkins one-at-a-time called NZAAT: it’s my new generic standard nōn-cryptographic hash, and the changes are these: while adding a byte, another increment of the hash is done (so NUL counts), and the finaliser got prefixed with the shift-left-add + shift-right-xor sequence of the adder (but without adding any value or the +1), to get the best avalanche for all bytes. I actually compiled several versions of Hash.cs on a Windows® VM at work to analyse the original one-at-a-time and all of my modifications; these turned out to be the simplest ones (I originally had added 0x100 instead of 1, but the effect was the same, and +1 is usually cheaper on most CPUs).
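
Spelled out as code, that description amounts to roughly the following (a sketch reconstructed from the prose above, not copied from the mksh source, so details may differ; nzaat_like is a made-up name):

	#include <stddef.h>
	#include <stdint.h>

	/* Jenkins one-at-a-time, modified as described: an extra increment
	 * per input byte (so NUL-valued bytes still change the state), and
	 * the adder's shift-add / shift-xor pair prefixed to the finaliser */
	static uint32_t
	nzaat_like(const unsigned char *buf, size_t len, uint32_t tbl_seed)
	{
		uint32_t h = 0;
		size_t i;

		for (i = 0; i < len; ++i) {
			h += buf[i];
			++h;			/* the extra increment */
			h += h << 10;
			h ^= h >> 6;
		}
		h += h << 10;			/* finaliser prefix: the adder */
		h ^= h >> 6;			/* sequence, without add or +1 */
		h += h << 3;			/* standard OAAT finaliser */
		h ^= h >> 11;
		h += h << 15;
		/* per-hashtable randomisation: XOR the seed onto the final
		 * value instead of seeding, as described above */
		return (h ^ tbl_seed);
	}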

Also, to avoid people being able to get at the seed, a user will always get only a sorted list of hashtable keys (numeric for indexed arrays, ASCIIbetical otherwise; see also my thoughts on JSON from the previous wlog entry). What algorithm do I use? For strings, comparisons are much more expensive, so I’d like to keep them low. Memory use is also a factor; allocating one large(r) block is better than many small ones due to the pool allocator overhead and due to portability to ancient Unicēs (which is another reason for me to use a hashtable that is a small struct plus an array of pointers, and then pass the list of keys as an array of string pointers, instead of a tree). For both reasons, I’m thinking of a relatively simple MergeSort: I need to allocate the result array anyway, so I can just get two and free the one that isn’t the end result, and it’s AFAICT the cheapest on comparisons other than Tree Sort (which nobody really seems to use, and which would amount to using a balanced binary tree again). Since keys are unique, stability and duplicate handling are never an issue. I’d like to use only one algorithm and one data structure, not a combination, as compactness is a design goal.
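
For illustration, a straight merge sort over the array of key pointers, with one extra scratch array of the same size (so exactly two allocations), looks about like this; the names kmerge/ksort are made up, and strcmp(3) stands in for the real comparison, which would be numeric for indexed arrays:

	#include <string.h>

	/* merge src[lo..mid) and src[mid..hi) into dst[lo..hi) */
	static void
	kmerge(char **dst, char **src, size_t lo, size_t mid, size_t hi)
	{
		size_t i = lo, j = mid, k = lo;

		while (i < mid && j < hi)
			dst[k++] = (strcmp(src[i], src[j]) <= 0) ?
			    src[i++] : src[j++];
		while (i < mid)
			dst[k++] = src[i++];
		while (j < hi)
			dst[k++] = src[j++];
	}

	/* sort a[lo..hi); tmp is scratch of the same size; result in a */
	static void
	ksort(char **a, char **tmp, size_t lo, size_t hi)
	{
		size_t mid;

		if (hi - lo < 2)
			return;
		mid = lo + (hi - lo) / 2;
		ksort(a, tmp, lo, mid);
		ksort(a, tmp, mid, hi);
		memcpy(tmp + lo, a + lo, (hi - lo) * sizeof(char *));
		kmerge(a, tmp, lo, mid, hi);
	}

The caller fills one freshly allocated array with the key pointers, allocates the second as scratch, calls ksort(keys, tmp, 0, n) and frees tmp; since keys are unique, no tie-breaking is needed.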

Please drop your thoughts on Freenode, e.g. by /msg MemoServ send mirabilos your text here or per eMail to the domains debian, freewrt or mirbsd, which are organisations, with the localpart tg. Or just contact me as usual, if you’re already acquainted. Or lookup 0xE99007E0. Thanks in advance! (Especially, Python Developers’ thoughts are welcome.)

The following proposals extend the JSON specification, with the idea of using JSON as an information interchange format rather than just a way of writing certain ECMAscript values. They do not add anything; they only restrict valid JSON content and encoders, with some rationale.

First off, I’d like to remind everyone, including JSON’s author, that JSON is case-sensitive, except in the four hex digits after a backslash-u sequence in a String.

Second, I’d like to remind everyone that JSON is not binary-safe. There is no way around that: it carries Unicode text (actually 16-bit UCS-2, and it doesn’t guarantee that UTF-16 surrogates are correctly paired). I also consider only UTF-{8,{16,32}{B,L}E} valid encodings for JSON. (No PDP endian, either. Sorry, guys.)

For my first proposal, I’d like to point out CVE-2011-4815 which was about overflowing hashtables. The obvious fix is to randomise the hash per hashtable; to ensure this doesn’t leak, we sort ASCIIbetically the keys of an Object in the encoder. (Using Unicode is good here – we can just sort the keys as UTF-8 strings by their uint8_t value or as Unicode (UCS-2 or even UCS-4 or UTF-16) strings by the codepoints.) JSON was never preserving the order of elements in an Object anyway so we make it standardised (we still accept any order, and, when parsing, in collision cases, the later value wins). This also helps diffs.

For my second proposal, I’d like to forbid \u0000, \uFFFE, \uFFFF in strings. The first because many implementations use C strings, and for an information interchange format this is better; it also has security implications to allow NUL in a string. The other two, but not unpaired UTF-16 surrogates (as ECMAscript uses UCS-2 and got UTF-16 only later) because they’re not valid Unicode; JSON was not binary-safe already so why bother. Among other benefits, this also helps implementations.

For my third proposal, I’d like to agree that implementations should impose a nesting depth limit that may be user-defined, in the face of which an encoder may skip cycle checking. I emit nesting depth overflows as a literal null; one might also throw an error. Since I was asked: the common “standard” value to restrict nesting depth to is 32, unless the user specified one. (I also saw 8, but 32 WFM pretty well.) Most seem to use it even if it may seem low at first. Only specialised applications probably need more, and they can always pass a value.

For my fourth proposal, backslash-escape U+007F‥U+009F always. These characters may upset humans, editors, databases, etc. (This paragraph was newly added, after some IRC discussion.)
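
As an illustration of what an encoder following proposals two and four does per character (a sketch with a made-up emit_char; BMP only, UTF-8 output; surrogate pairing and the short escapes like \n are left out):

	#include <stdio.h>

	/* write one BMP codepoint of a JSON string: reject U+0000, U+FFFE
	 * and U+FFFF; always escape U+0000‥U+001F (required anyway) and
	 * U+007F‥U+009F; emit everything else literally as UTF-8 */
	static int
	emit_char(FILE *out, unsigned int cp)
	{
		if (cp == 0x0000 || cp == 0xFFFE || cp == 0xFFFF)
			return (-1);		/* proposal two: not representable */
		if (cp == '"' || cp == '\\')
			fprintf(out, "\\%c", cp);
		else if (cp < 0x20 || (cp >= 0x7F && cp <= 0x9F))
			fprintf(out, "\\u%04X", cp);	/* proposal four */
		else if (cp < 0x80)
			fputc(cp, out);
		else if (cp < 0x800)
			fprintf(out, "%c%c", 0xC0 | (cp >> 6),
			    0x80 | (cp & 0x3F));
		else
			fprintf(out, "%c%c%c", 0xE0 | (cp >> 12),
			    0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F));
		return (0);
	}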

None of these makes anything acceptable that wasn’t accepted before. I’ve got a fifth proposal which does change the acceptance rules – but only for a subset of parsers: formally, JSON is defined in ECMA-262 as an industry standard that, in contrast to RFC 4627, always allowed any Value as the top-level element of a JSON text. I’d like to make it so, and ignore the RFC’s requirement for it to be an Object or Array. Even so, the first two characters (after the BOM, if any) of a JSON text are always in the non-NUL 7-bit ASCII range, allowing for encoding detection. (This is done by the NUL octet pattern in the first four octets.)

JSON has only taken off because it’s a tightly defined, simple format that can be used “everywhere” and isn’t too awful for humans (escaping is not needed for U+0020‥U+D7FF and U+E000‥U+FFFD after all, although I’d also take the C1 control characters out, see my fourth proposal above). I’ve started to use a trailing comma in indexed and associative arrays in code I write at work, when the array values are one per line, to help version control systems do their diffs, but I refrain from asking for a JSON extension to permit that so as not to endanger compatibility (no comment needed, it’s just not worth it); I would, however, like my above proposals to be followed by implementors (and I’m one of them).

Some more discussion with Jonathan pointed out that JSON5 allows for trailing commata in Object and Array; IMHO the only feature of it that is not bad or outright harmful. I’ll probably keep from accepting them because, on their own, they’re not that useful, and I usually would run JSON texts, even configs, through a parser/encoder roundtrip to pretty-print them which would lose them anyway.

As for binary-safeness: probably best to just use base64 and let the outer layers worry about compression. The data is usually unrelated to the JSON-encoded structure, and even if it’s related to other data the base64 representation is usually similar (unless misaligned).

Update 02.12.2012 – Wrong I was about the first two characters: “"€"” is a valid JSON text. Still possible to peek at four octets and determine the encoding by ordering the tests; updated my notes.
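
The updated detection then boils down to an ordered test on the first four octets, roughly like this (a sketch with a made-up json_encoding helper; BOM handling and inputs shorter than four octets are glossed over):

	/* detect the encoding of a JSON text from its first four octets,
	 * testing the 32-bit patterns before the 16-bit ones so that a
	 * text like "€" is still classified correctly */
	static const char *
	json_encoding(const unsigned char b[4])
	{
		if (b[0] == 0 && b[1] == 0 && b[2] == 0)
			return ("UTF-32BE");
		if (b[1] == 0 && b[2] == 0 && b[3] == 0)
			return ("UTF-32LE");
		if (b[0] == 0)
			return ("UTF-16BE");
		if (b[1] == 0)
			return ("UTF-16LE");
		return ("UTF-8");
	}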
