2004/07/28

The workaround is easy and nice.

((Geek)Blogger.getInstance(blogspot, "http://ianso.blogspot.com/")).postPetGoogleTheory()

"What is Google doing?" is a question on the mind of every geek, and, surprise surprise, I have my own pet theory.

Google (a) sees MSN and Longhorn coming down the pike, and (b) already has Yahoo to reckon with, who (via Overture) are suing over a central part of Google's revenue stream: adverts.

Why is Google hiring Java and web services guys? First, but not most importantly, to keep Microsoft on their toes. As I said, Google is the master of fire and motion.

Secondly, the meat of my theory: they want to offer search as a service, by dramatically expanding the strength, breadth and depth of the Google API. People will use this service; half of Google Labs is probably using it already, in beta form. Once this service becomes part of the infrastructure, which it will, then the API upgrades and improvements that everyone will expect from Google become yet more fire and motion.

That's why XML: Web Services. And why Java? Because that's probably what sits between the lean mean bit machines and the service interface: Java has security, scalability, maturity, portability, and it's not Microsoft - what else would they use?
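
To make the "search as a service" idea concrete, here's roughly what I imagine the client end looking like - every class, method and endpoint in this sketch is made up by me, not the actual Google API:

  // A sketch of "search as a service" from the client side. SearchService,
  // Result and the endpoint URL are all hypothetical - this is what I imagine,
  // not what Google actually exposes today.
  public class SearchServiceSketch {

      interface SearchService {
          Result[] search(String query, int start, int maxResults);
      }

      static class Result {
          String url, title, snippet;
      }

      // Stand-in for the stub a WSDL toolkit (Axis, say) would generate.
      static SearchService bind(String endpoint) {
          return new SearchService() {
              public Result[] search(String query, int start, int maxResults) {
                  return new Result[0]; // no real network call in this sketch
              }
          };
      }

      public static void main(String[] args) {
          SearchService google = bind("http://api.example.com/soap/search");
          Result[] hits = google.search("fire and motion", 0, 10);
          for (int i = 0; i < hits.length; i++) {
              System.out.println(hits[i].title + " - " + hits[i].url);
          }
      }
  }

The point isn't the plumbing; it's that once enough people have code like this in production, every improvement to the API is something the rest of the industry has to chase.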

2004/07/26

Invocation for a temperamental server

(To be said whilst standing over the afflicted machine, with a stout axe in one hand, and the other hand raised in the three-finger salute)

Oh, you poorly configured, obsolete, unmanaged, slow, crippled, sorry excuse for a Turing machine; you fragmentary, pitiful, unreliable, bug-ridden, much abused pile of weasel-droppings; you neglected, unsung, forgotten heap of wires, transistors and rusting machinery:

Hear my cry!

Boot, I beseech thee; and may aeons pass ere your system processes again slow to calcification; may the skies fall before your RAID array again fails; may the stars die before your fans again spin to a stop; and above all else, do so when someone is there to lovingly apply the hard reset, and ensure that someone is not me.

2004/07/24

Whatever happens,

We still have some good things in life: specifically, Ben & Jerry's "half-baked" chocolate, vanilla, brownie pieces and chocolate chip cookie dough ice cream.

Today my credit card gasps in shock, and the patron saint of frugality spins in his grave.

2004/07/22

Wheee!

You don't learn by doing things the easy way.

OK, so I got a new PSU today - a sweet QTechnology 350W gold edition. This was not just 'cos I obsessively upgrade my computer, but also because my old PSU had a rattle. PCs should be seen and not heard, except through speakers or headphones.

Anyway, so now Shodan is totally inaudible unless I pay close attention, runs at 40C normally and 64C at max load - i.e. when running the screensaver - and 64C is Intel's max rating for the 3.2GHz P4.

As long as I remember to turn the fans up when I shut the windows and jack the heating, then I'll be fine. I'm beginning to have doubts about the case profile though. Something like the new Lian Li would be nice, but I don't think they've yet got the envelope down pat. I'm going to wait for a few iterations and see where they go.

2004/07/21

BELGIUM!

Not that I have a business case or anything.

I know it's typically English of me to talk about weather, but nevertheless: this morning the sun was shining, I was eating ice-cream, people were lounging about on their balconies. As I write this, it is tipping it down with rain, monsoon-style.

This happens all. The. Time. In JULY.

Belgium!

The C-word

I have trouble not feeling powerless.

The word "cyberspace" is complete cliche, with no real clear meaning. A more accurate & descriptive term for "the Matrix" that Gibson et. al. describe is "dataspace", i.e. 3D representations of information constructs.

What most present-day 3D info-navigation tools (Java 3D desktop, ...) keep forgetting is that nobody will ever want to walk into a Word document when they can double-click on it instead.

The user zooming around the data is wrong. The data should zoom around the user.

Navigable geometries artificially imposed on concepts with no relation to real 4-space are newbie eye-candy at best, and a usability obstruction at worst. We have enough scalars to play with without needing to invent more:
  • time
  • space
  • distance
  • money
  • population density
  • Nielsen ratings
  • bandwidth

3D "cyberspace" won't be truly useful until we have a widespread method of 3D immersion. As elsewhere, LCD contact lenses are my favourite idea so far.

So, if that's where we want to get, how do we start getting there?

Dataspace is the end result of the ever-increasing visual data density of the 'Net. Given where we are now, how do we increase that?

When we increase the data density, we increase what we can learn per pixel.

  • As per VDQI (Tufte's The Visual Display of Quantitative Information), a decent chart or graph can summarize and accurately convey the meaning of thousands of four-digit numbers.

    Any infographic - a chart, a map, a graph - can, used properly, convey far more information in far less time than a raw display of the source data behind it.
  • There's no reason that a network-transported, vector-based XML infographic shouldn't link to, or contain within itself, the raw data for anyone who wishes to (a) examine or analyse the figures for themselves, or (b) construct their own representations of the data - see the sketch after this list.

    The important thing about information is that a) I know that it's always there, b) I know how to get it, c) it's easy for me to get. This is what good information architecture is all about. I don't know the dates of each English civil war, but I can get them using Google. Why memorise them?
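
As a sketch of that second bullet - plain Java writing an SVG bar chart that carries its own figures in a metadata block. The numbers and the "rawdata" element are invented for illustration, not taken from any real dataset or schema:

  import java.io.FileWriter;
  import java.io.IOException;

  // A vector infographic (SVG) that carries its own raw data, so anyone can
  // pull the figures back out and re-chart them. The values below are made up
  // for illustration, and the "rawdata" element is my own invention.
  public class SelfDescribingChart {
      public static void main(String[] args) throws IOException {
          String[] labels = { "2001", "2002", "2003", "2004" };
          int[] values = { 1200, 1850, 2400, 3100 };

          StringBuffer svg = new StringBuffer();
          svg.append("<svg xmlns='http://www.w3.org/2000/svg' width='420' height='220'>\n");

          // The raw figures travel with the picture.
          svg.append("<metadata><rawdata xmlns='urn:example:rawdata'>\n");
          for (int i = 0; i < values.length; i++) {
              svg.append("<point label='" + labels[i] + "' value='" + values[i] + "'/>\n");
          }
          svg.append("</rawdata></metadata>\n");

          // The chart itself: one bar per value, scaled to fit the canvas.
          for (int i = 0; i < values.length; i++) {
              int barHeight = values[i] / 20;
              svg.append("<rect x='" + (20 + i * 100) + "' y='" + (200 - barHeight)
                      + "' width='60' height='" + barHeight + "' fill='steelblue'/>\n");
          }
          svg.append("</svg>\n");

          FileWriter out = new FileWriter("chart.svg");
          out.write(svg.toString());
          out.close();
      }
  }

Anyone who receives the picture also receives the numbers, and can re-chart them however they like.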

2004/07/20

Software patents

As per the stories, I wonder if this is just FUD or if MS is really prepared to become as hated as, if not more hated than, SCO. If they go down this road, the "MS are evil" crowd might actually have a solid factual basis for the claim.

Software patents delenda est.

2004/07/19

I just had an idea

Wouldn't a cybernetic, biomechanical coffee machine produce a richer, more natural espresso than a normal one?

And you wouldn't have to clean it, it would clean itself. And make little baby coffee machines.

OK, maybe not that last one.

2004/07/18

The worst thing about programming

Users. Just kidding.

Unless you're a complete imbecile, you learn things as you work. What this means is that code you wrote a year ago now looks terrible, and what's more you know exactly what's wrong with it and you can't wait to get started on fixing it.

However, there is never time for this. Instead of fixing underlying crappiness, the only things end-users want are new features, new deployments, new customisation. Nobody will ever understand why you're spending time testing rewritten code that was already tested and working (if more slowly and less elegantly) two weeks ago.

The result is that any code for which you are entirely responsible throughout the lifecycle and which was written in an environment where time is money will always come back to haunt you.

It is not that I have a problem with acknowledging mistakes. It is not that I would rather forget about the code and leave users hanging. It's seeing what is wrong, knowing how to fix it, and being prevented from doing so.


I feel much better now

Life is war against death.

So, yesterday we went rock-climbing in the Ardennes. Outdoor climbing is different from indoor in that nothing is colour-coded, and you can find yourself staring at a blank cliff-face while hanging on by the tips of your fingers and wondering how on earth the last guy got past here. Of course, there's always a way.

While there I met a guy who works as an electrical engineer for a company that makes surgically implanted hearing aids. It turns out his field raises a lot of the same issues as the questions below. What I now know:
  1. LCDs can scale to the limits of etching technology, which (as any chip-maker will tell you) can go right down to the nanometer level. Alrightythen.
  2. Bandwidth requirements - there are several ways of getting bandwidth across empty space, of which radio is the least efficient. He told me about the current & future solutions, but they're still deciding whether or not to patent them, so I'm going to keep quiet for now.
  3. Detecting brain activity can be done now and is used for people with conditions that prevent voluntary muscle movement. They can move a cursor across a screen (so 2-bit input) and spell out words, etc. - a toy sketch of spelling with that little input follows this list.
  4. This is currently done with either intracranial or scalp electrodes. Larger detectors with a continuous surface should provide higher resolution.
  5. The whole area is a boom just waiting to happen.
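
Here's the toy sketch promised in point 3 - my own illustration of how little input you need to spell, not how the clinical systems actually work. With one yes/no signal per question, you can pick a letter by repeatedly halving the alphabet:

  // A toy: with nothing but a yes/no answer per question, you can spell by
  // repeatedly asking "is the letter in the first half of what's left?".
  // The answer is simulated here; a real system would decode it from electrodes.
  public class BinarySpeller {
      static final String ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";

      // Pretend signal: true means "yes, it's in the first half".
      static boolean saysFirstHalf(char target, String candidates) {
          return candidates.indexOf(target) < candidates.length() / 2;
      }

      static char pickLetter(char target) {
          String candidates = ALPHABET;
          int questions = 0;
          while (candidates.length() > 1) {
              int mid = candidates.length() / 2;
              candidates = saysFirstHalf(target, candidates)
                      ? candidates.substring(0, mid)
                      : candidates.substring(mid);
              questions++;
          }
          System.out.println("'" + candidates + "' in " + questions + " answers");
          return candidates.charAt(0);
      }

      public static void main(String[] args) {
          String message = "HELLO WORLD";
          for (int i = 0; i < message.length(); i++) {
              pickLetter(message.charAt(i)); // ~5 yes/no answers per letter
          }
      }
  }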

2004/07/16

Questions I have

Despite this crap, I do not want desensitization.

These questions are all for one reason, which I explain afterwards. I'm thinking of putting them on Google Answers with a non-trivial sum for each question.
  1. To what level can we miniaturise LCDs? Is a 5x5mm 2000x1200 pixel LCD completely impossible?
  2. What are the bandwidth requirements for a normal monitor cable? Can we get this bandwidth via very low-power wireless at distances of a centimeter or two? (A back-of-envelope estimate follows this list.)
  3. How thick is a contact lens? What is the maximum thickness for a contact lens?
  4. How much power does the best 1cm squared solar panel generate?
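
Before the explanation, a back-of-envelope stab at question 2, using the resolution from question 1 plus typical desktop colour depth and refresh rate - my assumptions, not measurements:

  // Back-of-envelope answer to question 2, assuming an uncompressed digital
  // link. Resolution is the 2000x1200 figure from question 1; colour depth and
  // refresh rate are typical desktop values, not measurements.
  public class DisplayBandwidth {
      public static void main(String[] args) {
          long width = 2000, height = 1200;
          long bitsPerPixel = 24; // true colour
          long refreshHz = 60;

          long bitsPerSecond = width * height * bitsPerPixel * refreshHz;
          System.out.println("Raw bandwidth: " + (bitsPerSecond / 1e9) + " Gbit/s");
          // Prints roughly 3.5 Gbit/s - far beyond any very low-power radio
          // link I know of, so compression or smarter encoding would be needed.
      }
  }
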
I'm wondering how difficult it would be to make contact lenses with inset LCDs. I imagine that billions would need to be poured into R&D to find out if it's possible. More questions:
  1. Neurons firing produces current. Current produces RF. How easy is it to detect that current?
  2. How hard is it to do this non-invasively? Can we do it for fingers as well as arms?
I'm also wondering how hard it would be to build a passive RF detector that picks up the impulses associated with muscle movement, and to use it as a direct interface between the brain and the computer. Wrists and fingers are fragile.

Likewise, this kind of thing would, if possible, be the result of Manhattan-level R&D. The result, however, would be worth it. Air-traffic controllers, navigators, programmers, and gamers all dream of something like this every time they sit down in front of a computer.

Media reporting & Opinion

The Crucible: How the Iraq disaster is making the U.S. Army stronger. Phillip Carter is a very smart guy, who knows his stuff.

This is what I like about the 'net, and hate about the mass media. The media see analysis and opinion as the "value add", which is horse-hockey unless you're talking about the work of the smart, experienced and knowledgeable military correspondents working for America's biggest newspapers, of whom I estimate there are fewer than ten in total. What the media have, and their competition (armchair analysts) don't, is boots on the ground.

I don't care what the BBC correspondent thinks about Bremer. There are people (like Carter) who are 10 times smarter, 10 times more knowledgeable, 100 times more honest about where they stand on the issues - there's no such thing as an unbiased position - and 100 times less likely to make stupid mistakes over ranks, confuse the Individual Ready Reserve with a draft, or confuse soldiers with marines - all things that people like Jason Van Steenwyk take so much glee in pouncing upon.

More: I'm not objecting to media analysis & opinion. What I object to is the unending confusion between that and actual reporting. Reporting is "Two marines killed in Fallujah"; analysis is "Marines at war with local community". One belongs in a news report, the other in an analysis piece. There is obviously a very thin line between the two, but very few people make the effort to find it.

Stratfor is particularly good at this. So is the Economist, some people at the NYT and the WaPo, and one guy at the LA Times. I've also been told that Le Monde Diplomatique is supposed to be very good.

Damnit.

Business is a big board game.

Anyone who knows me has heard me say over and over again that (a) learning is a process that should never ever stop, and (b) change is life. No change, no life - if you refuse to change, you might as well be dead.

I'm in a post-anticlimax frame of mind.

Good Grief.

For the record, this makes me absolutely sick to my stomach. Bad, Wrong and Evil does not begin to describe it.

Google's "beta" habit is an extremely effective form of fire and motion. They ignore web standards because to them, web standards are fire and motion.

In review

I specialize in idle speculation.

Gradient is the second software project that I've carried through to completion, and the first for which I've published source code and documentation.

I had no idea just how much work is involved in actually publishing source code properly. I was expecting it to take one month, and instead it took three and a half.

The code itself does very little - and it's difficult to explain the potential it has without some flashy examples, which I don't have at the moment.

I haven't received much feedback on it yet, because I haven't spent any serious time on PR. Even if the thing never grows legs, I've learnt a lot by doing it:
  • It was the first project for which I used Eclipse, which I use everywhere now.
  • Same with Ant.
  • Hyades - a very cool memory profiler. The code is truly impressive, both in its capabilities and in its use of system capabilities...
  • As always, with every bug you find, you learn a little more about programming.
There are a number of things I could do next, but the most interesting one is to help fight software patents. We Shall See.

Camera phones as mice

Never start a blog with a hangover.

Mobile phone cameras combined with pattern recognition software have killer app potential.

Semacode is phone software that uses pattern recognition to extract hyperlinks from 2D barcodes.

Spotcode uses a similar idea with circular codes that let the user shift, pan and rotate, essentially using the mobile as a pointing device. The phone interfaces with the display over Bluetooth.

The people behind Spotcode also see the phone becoming a storage and authentication system, which is entirely rational - mobiles are the closest thing we have right now to a personal trusted computing base.

The first missing piece of the puzzle is a similar scheme for movement in 3D: moving the phone back and forth, and tracking its rotation around all three axes.

Secondly, as far as I can tell, there's no J2ME API for camera access.

Lastly, both Semacode and Spotcode are proprietary :-) but that's just me wishing I had something to hack on.

I'd like to see phonecam-based interfaces become ubiquitous for things like public smart displays, info-points (such as floor plans), etc. This type of interface has the potential to surpass touch-screen technology, which is normally the chosen solution in this kind of situation.

A camera phone that also functions as a normal bluetooth-enabled mouse would be interesting.
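
As a sketch of the desktop half of that idea: assume the phone streams "dx dy" lines over a Bluetooth serial link, and gloss over how that link surfaces as a Java stream (System.in stands in for it below):

  import java.awt.Dimension;
  import java.awt.Robot;
  import java.awt.Toolkit;
  import java.io.BufferedReader;
  import java.io.InputStreamReader;

  // Desktop half of the "phone as mouse" idea: read "dx dy" lines and nudge
  // the pointer with java.awt.Robot. How the Bluetooth serial link is exposed
  // as a stream (RFCOMM, a virtual COM port...) is assumed rather than shown;
  // System.in stands in for it here.
  public class PhoneMouseReceiver {
      public static void main(String[] args) throws Exception {
          BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
          Robot robot = new Robot();
          Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();

          // Start in the middle of the screen and apply relative moves.
          int x = screen.width / 2;
          int y = screen.height / 2;

          String line;
          while ((line = in.readLine()) != null) {
              String[] parts = line.trim().split("\\s+");
              if (parts.length != 2) continue; // ignore anything malformed
              x = clamp(x + Integer.parseInt(parts[0]), screen.width - 1);
              y = clamp(y + Integer.parseInt(parts[1]), screen.height - 1);
              robot.mouseMove(x, y);
          }
      }

      static int clamp(int value, int max) {
          return Math.max(0, Math.min(max, value));
      }
  }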

Use cases:

  • As on the Spotcode website: Bluetooth-enabled LCD displays for airport floorplans. Key points: "You are here" info and "Select destination gate" (a perfect example of throwing an interface), combined with route planning and flight information tracking.

  • Corporate floorplans, employee directories (access protected)

  • Supermarket product search (and suggestions)

  • Others I'll think up later.


What the hell?!

My intention is to put an introductory post here, and then forget about this blog for six months.

Then I'll make a one-line post apologising profusely to a non-existent readership for the lack of posts, and promising more content in future. Then I'll put a big animated "under construction" gif on the front page.