So, as per my normal proclivities, I have read all available documentation and hereby declare myself an expert on the subject.
Looking at the press on S3, particularly the SmugMug success story, I'd got the impression that S3 was more than a large pay-per-use filesystem, which it's not. It is, however, a very, very well thought-out large-scale decentralized high-availability filesystem, which is not to be sniffed at.
However, I don't understand why SSL is (or is intended to be) mandatory for SOAP requests, whereas it's not (as far as I can tell) for the REST API. I'm sure there's logic to the design choice, but anyway.
My other comment is that, given the ability to attach metadata to objects, it's kind of a shame you can't query on that metadata, even at a relatively simple level. This would be great for e.g. the purposes of tagging, although sundry other use cases present themselves, if you think of the potential of storing structured, as opposed to opaque, data.
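For reference, attaching the metadata itself is just a matter of extra headers on the PUT; a sketch (bucket, key, and metadata names invented, auth headers omitted):

PUT /my-bucket/photo-42.jpg HTTP/1.1
Host: s3.amazonaws.com
Content-Type: image/jpeg
x-amz-meta-tag: holiday
x-amz-meta-author: ianso

It's exactly this x-amz-meta-* namespace that you can write but not query.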
Despite the above gruntles, Amazon are still closer than any other player to reaching a true commoditization of the basic computing resources: CPU time, storage space, and bandwidth. As Bezos says, there's an awful lot of friction involved in setting up a web service. Anything that can reduce that interests me.
2006/12/30
2006/12/14
ISS, BXL, XMPP, SNAFU
- Watching ISS debug their solar arrays in real-time is not something I would have thought possible for normal mortals ten years ago. What's also funny is that all they really need is for someone to poke at it with a stick, but it's in space, so...
- Cycling in Brussels, Belgium: tram rails, being essentially slippery-when-wet metal-lined grooves in the road, approximately the same width as a thin bike tyre, must not be crossed at angles of less than ~30°.
- I am remiss in not mentioning that xmpp4moz has pretty much done everything I dreamt of doing re: adding an XMPP engine inside FF, and is an impressive piece of work. Preliminary fiddling shows that binding new native functions to the top-level window object, or adding new types of events, is Not Simple. Considerable coolness has already taken place.
- Were I not so busy at work, I would dive into this head first.
2006/07/22
E4X and the DOM: another take on conversion, more reasons to "grr".
So, here's me writing my cool graphing thing (if this were in sillyvalley, it could easily be half the IP assets of a startup and the key to USD1M funding :-p), when, looking for real-world E4X examples, I find an interesting blog post by ecmanaut linking to this email by Mor Roses, chiefly concerning the lack of E4X<->DOM conversion, e.g. (to take Johan's example) bemoaning the impossibility of
node.appendChild( <img src="url" /> );
Me, I'm 100% in agreement that this kind of thing is a definite oversight, but when one considers that the DOM API is governed by the W3C while E4X and ECMAScript are both ECMA standards, one could charitably understand why ECMA didn't want to tread on a fellow standards body's toes. Or, more cynically, promote the use of competing specs.
Anyway. This becomes one more hurdle at the presentation level, which I can live with. After having written DOM-driven SAX event drivers and DOM-constructing SAX event handlers, not even touching XMLPull etc., why not add another to the mix? Bring the noise.
Which brings me to my next point, which is that I can rarely bring myself to use JS code that I haven't fiddled with. The functions in the above links are functionally perfect, but I don't get why the XMLNS is hard-coded and not passed as an argument, since FF can grok SVG and MathML now. Conversely, the mime type of "text/xml" is a static constant, when, for the purposes of the parser, it will never change. Additionally, having a Java background means I instinctively namespace my code, meaning I don't have to assign anything to the function properties.
My purely aesthetic revision:
Here's hoping that blogspot doesn't mangle my escaped & formatted characters.
var NS_HTML = "http://www.w3.org/1999/xhtml"; // default namespace (assumed; not defined in the snippet as posted)
lib.util.e4x2dom = function(xmlObj, xmlObjNs, doc) {
  if(!doc) doc = document;
  if(!xmlObjNs) xmlObjNs = NS_HTML;
  var xmlRootObj = <root xmlns={xmlObjNs} />;
  if(!lib.util.e4x2dom.parser) lib.util.e4x2dom.parser = new DOMParser(); // one-time initialisation
  xmlRootObj.firstChild = xmlObj; // E4X [[Put]]: <root/> has no child named "firstChild", so this appends xmlObj itself
  var domTree = lib.util.e4x2dom.parser.parseFromString(xmlRootObj.toXMLString(), "text/xml");
  var importMe = domTree.documentElement.firstChild;
  while(importMe && importMe.nodeType != 1) { // skip any non-element nodes
    importMe = importMe.nextSibling;
  }
  return (importMe) ? doc.importNode(importMe, true) : null;
}
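A usage sketch (assuming the lib.util namespace object already exists; the target element is hypothetical):

var node = document.getElementById("gallery"); // hypothetical element
node.appendChild(lib.util.e4x2dom(<img src="url" />));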
HAVING SAID THAT,
A second consequence of the unwanted duality is (joy!) complete lack of access to DOM-specific functions. In HTML this isn't such a big deal, but in SVG when you construct text elements, the getComputedTextLength() method is invaluable and is often used during the construction of the element, i.e. in my case when it's still in E4X and not in DOM.
So this is what I'm left with:
var SVG_NS = "http://www.w3.org/2000/svg"; // assumed constant
var text = "foo bar";
var textDOM = document.createElementNS(SVG_NS, "text"); // SVG elements must be created with createElementNS
textDOM.appendChild(document.createTextNode(text));
var textE4X = <text y="0" x={textDOM.getComputedTextLength()}>{text}</text>;
Which is a nasty hack, to put it nicely. No code that touches the real world will ever be perfect.
Addendum: the above snippet doesn't work. The text has to be rendered before FF can give me a length. Foo.
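For the record, one possible workaround is a sketch like the following: insert a probe element into the rendered tree, measure it, remove it, and only then build the E4X. (Whether the brief insertion is acceptable depends on your page.)

var SVG_NS = "http://www.w3.org/2000/svg";
var svgRoot = document.documentElement; // assuming an SVG document
var text = "foo bar";
var probe = document.createElementNS(SVG_NS, "text");
probe.appendChild(document.createTextNode(text));
probe.setAttribute("visibility", "hidden"); // avoid a visible flicker
svgRoot.appendChild(probe); // must be in the rendered tree before measuring
var len = probe.getComputedTextLength();
svgRoot.removeChild(probe);
var textE4X = <text y="0" x={len}>{text}</text>;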
2006/05/18
The rules of crunch time, v2.0.1alpha
(Although they're actually considered more as guidelines.)
So, it has been a year and a half since I last went ridiculously overboard as geeks are wont to do, and sacrificed sleep and sanity for a dream. In the end I went to SF anyway, and the whole thing was Most Definitely Worth It.
Now, fast-forward, and I'm doing it again, professionally this time, meaning that it's not my Big Idea that I'm working on - it's Mr. Architect who has the visions - but it is my team, and to Serve with Comrades in the hope of a Bright Shining Future is the lifeblood of all (knowledge-) workers, n'est-ce pas? Вся власть народу трудовому! (All power to the working people!) etc.
Ahem.
Given that some time has passed, I feel it necessary to add and/or emphasize certain elements of the previous list, based on experience:
- I said that Exhaustion drastically affects quality of work and this is true, but it also affects morale. This is why having great people in your team is important.
- Food and coffee, both still important, but food more than ever. Something that deserves emphasis is that lots of fresh fruit and vegetables makes an incredible difference to state of mind. This stuff is healthy, dagnamnit, so no more McQuick!
- Still true
- Still true
- Sleep: cutting out sleep is bad, of course, but sleep patterns and schedules can be modified to tune for maximum productivity. One of the biggest gains is to be had by going to bed early (like I'm not, today) and waking up early too. Getting to work at closer to 07:00 than 09:00 gives you an appreciable chunk of highly productive time.
- Still true.
- This point - support mechanisms - deserves to be expanded much more.
Crunch mode doesn't work for more than a couple of months, but for short periods of time we do it anyway, because once the visceral fear of missing a real Dead Line takes hold, people tend to go into overdrive as an evolved response to high-pressure environments.
One of the biggest symptoms of increased work pressure is sacrifices made in other parts of one's life - relationships, interests, etc. This is not good because these things are all psychological support mechanisms that you need to leverage if you want to be able to push yourself to extremes.
This is why energy is so important: not only are you working harder than normal, you should also be relaxing harder than normal. Dependent or abusive mechanisms will simply make things worse in this situation. High pressure at work in turn puts pressure on everything else in your life, and everything has to withstand it.
Thankfully I have great friends, a great family, and a Deity (to help with debugging :-), so with their love and understanding I'm dealing with everything fine. So far...
2006/04/22
Skypephone / mouse combo:
This mousephone from Sony (Japan) is a step towards what I've posted about on different occasions, which is basically that handsets are becoming more of a true TCB (trusted computing base) than smartcards or Palms ever will be.
What's added by this is that the clamshell design is obviously a great way to hide the "phone" nature of the chunk of plastic when it's not being used in that manner. Smart thinking. Apple, your iPhone should do this.
2006/03/12
"open source music"
Various OSS people get the idea now and again of creating "open source" music distribution channels with the aim of disintermediating the big four. This is commendable, but the strategic aim is more likely to be accomplished by other channels such as CDBaby and iTunes. For history's sake, I should note that the idea was mooted much more heavily in the post-Napster era, before Apple did their thing.
What makes these new channels so dangerous to the entrenched model is ironically the very feature that model has evolved to perpetuate itself: the artificial hit mentality. There only needs to be one artist that achieves the success of Coldplay or Jack Johnson while remaining totally outside the system, and the perceived monopoly power of the big four disappears.
But this is not the aim of my post. People often characterise the "open source music" model as one that exhibits the same features as the open source world:
- no dollar-cost barrier to entry/use
- gift culture among producers
- community among consumers
- Freedom, if you're an FSF guy.
It should also be noted that the above attributes are not even the most important outputs of the OSS process, nor are they uniformly present among OSS projects. The most important output is the source: the preferred form for working on the project, which can also serve as the input to its own project and to others.
Taking this into account, what is open source music? A piece of open source music comes with:
- the finished product
- the sheet music if necessary, in a notation such as LilyPond
- all the samples used, if any
- all the tracks (vocal, beat, etc.) as separate files
- details of configurations used to produce any effects
- etc.
Now, I've spoken to artists about this, and the two things they all came back with are (a) it'd be great to have, (b) nobody will ever do it because of the loss of control over their creative output. And sadly, they're probably right on both points. I think there's a bit that needs flipping in the collective unconscious for this to ever happen on an appreciable scale.
The point: open source music is not the same as free music. A whole bunch of stuff goes into creating music that we as 'consumers' never see, and this has to hit the 'net along with the finished .mp3/.flac for any piece of music to qualify as being open source.
2006/02/27
More frontend ranting.
Firstly and most importantly, this is absolutely mindblowing, and I WANT ONE, Apple take note! etc. This hit the 'sphere about ten years ago but I also want to note it here.
Secondly, I have said many times before and I will say it again: defining page navigation as traversal within a directed state graph is a weak substitute for using a powerful language able to express arbitrarily complex logic, i.e. a programming language equipped with continuations, as Ruby and Lisp are natively (and Java only via continuation-based frameworks).
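To make that concrete, here's a hedged sketch (every name in it is invented) of a multi-page flow written as straight-line logic in continuation-passing style, instead of as entries in an XML state graph:

// Hypothetical framework primitive: showPage(name, k) renders a page and
// calls k (the continuation) with the submitted form data.
function checkoutFlow() {
  showPage("cart", function(cart) {
    showPage("address", function(address) {
      if (address.country != "BE") {
        showPage("customs-warning", function() { ship(cart, address); });
      } else {
        ship(cart, address);
      }
    });
  });
}

The branching of the flow is just ordinary code, rather than a graph serialized into configuration.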
Fourthly, most of what is written in a modern frontend should not be. We have page->BOM and BOM->page inference/generation, smart binding code and ActiveRecord to light the way. XML config syntaxes and convoluted frameworks are not the answer. Java framework developers have no excuse for making their end-users write XML with the arrival of annotations in 1.5.
A good solution makes the simple things easy and the complex things possible. Too often, we make the simple things stupid and the complex things idiotic.
2006/01/27
2006/01/17
Idempotent event queue in JavaScript for use with Behaviour, etc.
[caveat: my original copy now does more, i.e. copies arguments and uses "this" properly. Will paste in later.]
So anyway, there's me using my (modified) copy of the very groovy Behaviour to extract all the JavaScript from my nice clean semantic HTML(or JSP (*cough* bleh)), and wondering how I ever lived with the mish-mash I just rescued.
Generally speaking, Behaviour rulesets are applied on page load, (hence Behaviour.addLoadEvent), whereupon all the selectors in all the rulesets are evaluated and all the events are assigned and added.
However, whenever the page changes (as a result of the Prototype Ajax.Updater, for example), the standard way of ensuring that the rules still apply is to call Behaviour.apply() again, which re-assigns all the events to all the elements etc.
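For anyone who hasn't used Behaviour: a ruleset is just a map from CSS selectors to functions, registered once (the selector and handler below are purely illustrative):

var myRules = {
  ".button" : function(el) {
    el.onclick = function() { alert("clicked " + el.id); };
  }
};
Behaviour.register(myRules); // applied on page load
// ...and after an Ajax update:
Behaviour.apply();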
BUT, we have two issues here. Fit the first: suppose one element matches two selectors. If we assign our events in the normal way that works 90% of the time, as follows (to take a simple example):
".button" : function(el) { el.onclick = pressButton(el.id); },Here, two events (one for each rule) would be assigned to the same element, and the first one would overwrite the second one since el.onclick can only ever be one thing.
".important" : function(el) { el.onclick = warn("Atchung!"); }, //...
For this element:
<a class="important button">click</a>
The standard answer to this is simply to use addEventListener (FF/W3C) or attachEvent (MS). This solves our first problem. A second possible answer is to adapt Simon Willison's addLoadEvent code to enable the addition of multiple functions to arbitrary events, as follows:
/* functionally equivalent to the original */
function addLoadEvent(func) {
  addEvent(window, "onload", func);
}
/* generic version */
function addEvent(obj, evt, func) {
  var oldEvt = obj[evt];
  if (typeof oldEvt != 'function') {
    obj[evt] = func;
  } else {
    // chain the old handler and the new one
    obj[evt] = function() {
      oldEvt();
      func();
    };
  }
}
HOWEVER, when adding events using addEventListener, attachEvent, or addEvent, subsequent calls to Behaviour.apply() will re-add new events to the same elements, with the result that events added via Behaviour will fire once for each re-application of the given ruleset, i.e. potentially far too many times.
The code that demonstrates this is as follows:
var f = function() { alert("foo"); };
var o = new Object();
addEvent(o, "bang", f);
addEvent(o, "bang", f);
addEvent(o, "bang", f);
o.bang();
With this example, using the above addEvent code (and addEventListener etc., although I haven't tested that and I'd look mighty stupid if I'm wrong), the "foo" alert will appear three times. This is not what we want.
My solution was to write my own event queue for which the "add" function is idempotent. This means that adding the same function three times (as above) has no effect the second and third times. (More properly, an idempotent operation is one for which one execution or many are functionally identical.)
This requires the ability to compare functions, and luckily the toString() of a Function object returns the source of that function - the ideal way to find identical chunks of code.
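A quick illustration of the trick - including its quirk, which is that two distinct closures with identical source compare as equal, and that is precisely what makes re-applied rulesets idempotent:

function makeHandler() { return function() { alert("hi"); }; }
var a = makeHandler();
var b = makeHandler();
alert(a == b);                       // false: two different function objects
alert(a.toString() == b.toString()); // true: identical source text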
So, without further ado:
/*
this is functionally equivalent with the original - for reverse
compatibility
*/
function addLoadEvent(func) {
addEvent(window, "onload", func);
}
/*
usage as above, e.g. addEvent(element, "onclick", function() {} );
generalized and adapted from
http://simon.incutio.com/archive/2004/05/26/addLoadEvent
*/
function addEvent(obj, evt, func) {
var oldFunc = obj[evt];
if (typeof oldFunc != 'function') {
obj[evt] = getEvent();
obj[evt].addListener(func);
} else {
if(oldFunc.__EVT_LIST) {
obj[evt].addListener(func);
} else {
obj[evt] = getEvent();
obj[evt].addListener(oldFunc);
obj[evt].addListener(func);
}
}
}
/*
this could be put within the above function, but that causes a memory
leak in IE
*/
function getEvent() {
var list = [];
/*
this extends the array instance to allow
for string comparison of functions
*/
list.hasFunction = function(val) {
for (var i = 0; i != this.length; i++) {
if(this[i].toString() == val.toString()) return true;
}
return false;
}
/*
this is the actual function that is called when the event is fired.
if any of the listeners return false, then false is returned,
otherwise true
*/
var result = function(event) {
var finalResult = true;
for(var i = 0; i != list.length; i++) {
var evtResult = list[i](event);
if(evtResult == false) finalResult = false;
}
return finalResult;
}
/*
this is the function on the event that adds a listener
*/
result.addListener = function(f) {
if(f == null) return;
if(list.hasFunction(f)) return;
list.push(f);
}
/*
this is a debug function - feel free to remove
usage example: window.onload.list();
*/
result.list = function() {
var log = "";
for(var i = 0; i != list.length; i ++) {
log += "<pre>"+list[i]+"</pre><hr/>";
}
var wnd = window.open("", "EventDump"); // IE rejects window names containing spaces
wnd.document.write(log);
}
/*
this is a semaphore to ensure that we play nice with other code
*/
result.__EVT_LIST = true;
return result;
}
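A usage sketch showing the idempotence in action (the element id is hypothetical):

var f = function() { alert("foo"); };
var el = document.getElementById("target"); // hypothetical element
addEvent(el, "onclick", f);
addEvent(el, "onclick", f); // silently ignored: f is already in the queue
// clicking el now fires f exactly once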
Enjoy. I think a nice touch is the ability to list the events attached to an element by calling (e.g.) window.onload.list(); but this is generally debug stuff.
I should sleep.
"Referer" header not set on HTTP requests originating from assignment to "window.location" variable on IE6
This one is annoying. Suppose you were to click on the below link:
<a href="http://google.com">Google</a>
In both Firefox and IE6 the "Referer" [sic, TBL we love you!] header is set to the URL of the page on which the clicked link existed, e.g.:
GET / HTTP/1.1
Host: google.com
Referer: http://ianso.blogspot.com
However this code, which is functionally (but not semantically) equivalent:
<span onclick="window.location='http://google.com'">Google</span>
omits the "Referer" header from the request it generates. (Ignore for the moment that the example is deliberately facetious. In RL, the onclick might call a function that might call a confirm that might change the page location.)
Why does this suck? Because you may want to be able to launch an operation sequence from a view page, and then return to that page to view changed state upon completion of that operation. And you might want to do the same operation from multiple view pages. Which means that you have to keep track of where you came from in order to direct the user back to the same place afterwards. This is an ideal use case for the "referer" header.
However, if you decide to direct the user to a new page that (for example) had its URL constructed in JavaScript, then this becomes annoying. The workaround to this trivial, stupid bug that I only need because I'm using a nasty hack is as follows:
function goTo(url) {
var a = document.createElement("a"); // note the quotes: createElement takes a tag name string
if(!a.click) { //only IE has this (at the moment);
window.location = url;
return;
}
a.setAttribute("href", url);
a.style.display = "none";
$("body").appendChild(a); //prototype shortcut
a.click();
}
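Usage is then just (URL and variable invented for illustration):

// navigates with the Referer header intact on IE6, via the synthetic click
goTo("/orders/edit?id=" + encodeURIComponent(orderId));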
(Normal caveats apply, i.e. this is probably me being ill and sleep-deprived and casting about wild accusations concerning specks in the eyes of MS developers while smashing windows with the redwood stuck in mine, but anyway.)
2006/01/05
MUSIC! CODE! MONKEYS!
(I should be asleep.)
Firstly, 50FOOTWAVE have published "Free music" which is a 5-track EP of good hard stuff. Available in MP3, etc. and also FLAC. I'm so used to MP3 that the quality of FLAC is like a breath of fresh air.
Secondly, and on this subject: I was recently re-acquainted with just how beautiful a decent record player with nice speakers can sound when given decent vinyl to groove on, esp. Dire Straits guitar solos and classic jazz. So given how I come across new music (friends give it to me, sometimes on a wholesale basis), an ideal music infrastructure begins to look as follows:
- MP3 for general music consumption
- CDs for archival & liner art
- Vinyl for when the music is just so good.
- Lisp, spreadsheets, and RagTime (from its description) all seem to embody a spirit of computational fungibility that shows what computers should be like in future. The reason I can't take the aforementioned 3 programs/languages/environments and produce said Nirvana is because computers suck.
- Closures in JavaScript are nice. var f = function() {} and all that makes events much nicer to use (a quick sketch follows this list).
- I've been forced to confront my instinctive fear of big, sophisticated IDEs from gigantic megacorporations. I'm worried that their code is so smart that my job will become no more challenging than that of the average Visual Basic droid. This would suck.
- This may simply be post-Microsoft-IDE trauma. I remember VB4... VBA... generated code that should never have seen the light of day... *shudder*
- Legitimate reasons for ignoring these things still exist, lock-in to their own evolutionary path being the biggest and baddest.
- This is why, if I have to move intelligence into code, I'd rather it was open-source code. That way, when I've trained Rhesus monkeys equipped with build scripts to construct web applications based on my legion of sequence diagrams, then I could hack on the code to make the computer do even more of the boring stuff computers are good at and which I detest. (repeat after me: a good programmer is a lazy programmer...)
- Speaking of Work:
- Time was, people would sleep through Winter 'cos there were no crops to harvest and no light to work by. Humanity's biorhythms are adjusted to this pattern.
- Now, we work 8-hour days all year round, and waking up at 7:00 in the dark is a truly crushing way to start the day (not to mention Brussels public transport.)
- Therefore, why not work 10-hour days for half the year, and 6-hour days for the other half, when I'd rather be in bed? Eh?
- I'm currently reading a translation of Les Miserables by Victor Hugo (who is a genius), and it truly is an absolutely incredible work.
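The promised closure sketch (element id and handler invented):

function wireUp(el, name) {
  // 'name' is captured by the closure, so each handler remembers its own value
  el.onclick = function() { alert("hello, " + name); };
}
wireUp(document.getElementById("greetBtn"), "world"); // hypothetical element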
2006/01/02
Prototype-style shortcut function for XPath expressions
Background: using Prototype is allegedly like using crack - immediately gratifying and bad for the whole application. The wisdom of using for(in) notwithstanding, I dunno.
Anyway, the nicest things of all in Prototype are the shortcuts: $() and $F(), which make my life much more chilled out (pass the pipe dude,) and so I hereby introduce an equivalent for XPath munging: $X().
This needs Sarissa to paper over the differences between IE and FF. Needless to say, Safari users can go swivel until Apple gets off their butt and improves their XSLT support. </flamebait>. The commented-out 'Logger' is using Lumberjack syntax.
function $X(xPathExpr, ctxNode, doc) {
if(!ctxNode) {
ctxNode = document;
}
if(!doc) {
if(ctxNode instanceof Document) {
doc = ctxNode;
} else {
doc = ctxNode.ownerDocument;
}
}
var result = doc.selectNodes(xPathExpr, ctxNode);
if(result == null) {
//Logger.debug("no match for "+xPathExpr);
return null;
}
if(result.length == 1) return result[0];
return result;
}
- $X("//p") returns all paras in a document.
- $("//p[@id=foo]") returns one para with id foo.
- arg[1] optionally specifies the node context (relative root)
- arg[2] can specify a different document to work on, for example one retrieved via XMLHttpRequest.
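For example, combining the arguments (ids and variables invented):

var container = $X("//div[@id='posts']"); // one match: the node itself is returned
var links = $X(".//a", container); // all links relative to that node
var paras = $X("//p", xhrResponseXML); // querying a fetched XML document instead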
Coffee and laptops:
If you can afford a decent laptop, you should also make sure you have nice thick ceramic coffee cups, and saucers, to go alongside it.
Decent coffee cups are harder to tip over than disposable plastic cups.
Posted because I recently killed (well, severely maimed) a laptop in the time-honoured tradition of all over-caffeinated coders.