Saturday, 31 December 2011

Using a DS1307 clock with a Teensy

This is a fuller description of the clock I put on Google+ yesterday.

The purpose of this is not so much to build a clock but to verify that I can make a ds1307 clock work with a Teensy++ 2.0 board. Since I had an LCD lying around I hooked that up as well to display the time and date.

The ds1307 is a clock on a chip. This variant comes as a neat little package with a watch battery on the back. This is important because once power is disconnected from the Teensy it 'forgets' the current time. The ds1307 clock remembers it and keeps on ticking so when I power up the Teensy it gets the current time from the ds1307 and uses that. The ds1307 also has a few bytes of spare memory which, like the time, is also battery backed. So stuff I store there also survives loss of power.

The Teensy is a nice little package with an AT90USB1286 processor, a USB port and a load of pins I haven't figured out properly yet. Unlike the Arduino, the Teensy's USB is supported directly by the processor. That means it can fully mimic USB devices, which I'm not interested in yet, but I also gather it means loading programs into the board is faster (though I haven't used the Arduino). This processor has loads more memory than the Arduino UNO, though there are other variants of Arduino which may be more comparable. I program the Teensy using the Arduino IDE, however. That means the Arduino libraries for device support are available, as well as all their samples. There is a custom boot loader in the Teensy which some people might not like because it is not completely open source. It's freely downloadable though.

The LCD is a KS0108-compatible 128x64 and I adapted the demo program for that to simply add the time pulled from the clock. All wired up it looks like the image on the right.

The ds1307 is the red board just left of the LCD. Almost all the wires are for the LCD because the clock only needs four wires.

The code is adapted from a demo for the LCD so there is a bit of rubbish in there that I could trim out if I were less lazy. The vital bits are the references to RTC and RTCE. RTC is a reference to the DS1307RTC class which manages fetching the time from the ds1307. The whole trick, really, is this line:

setSyncProvider(RTC.get);

which tells the time library to pull the current time from the ds1307 so when we request the time using:

time_t currentTime = now();

we get the time from the ds1307. Actually the Time library caches the value and only re-syncs with the ds1307 every so often, but I don't have to care about that.

But that doesn't quite get everything I want because the ds1307 doesn't know what time zone it is in. The simplest way to handle this is to just set the time to local time and forget the issue, but it will not look so simple when we change to or from daylight saving, or when I travel to a different time zone.

Besides, that extra non-volatile memory is quite attractive too, so I added a subclass of the DS1307RTC class called DS1307RTCE, which has a couple of extra methods to save and fetch stuff from that memory. In this case I pass a struct called config using this:

RTCE.read1((char *)&config,sizeof(config));

This reads from the extra memory and copies it into the config struct. There is a similar write1() method which does the opposite. What do I use this for here? I use it to hold the time zone as an offset from GMT. Having fetched the config struct I do this:

time_t currentTime = now() + (config.zone * SECS_PER_HOUR);

That corrects the time. The current value for config.zone is +13; when we change to winter time it will be +12. I also store a string in the config struct but I don't use it yet. I anticipate storing other stuff in there though, depending on how I want the eventual device to behave. This is nicer than hard coding things.

To change it I have another program which gets data from the serial interface. The development environment allows me to just type in stuff and it ends up in the config struct, which I then write1() to the ds1307. But I might eventually build a UI on the device to set what I want (when I have decided what I want it to do...).

Friday, 9 December 2011

Competing on Content

This is probably not huge news everywhere but it is stirring the waters a bit in the publishing community.
Amazon have announced a deal for writers who want to directly publish through them. I won't bore you with the details here because you can easily find out more elsewhere.
But the thought I had about this is that Amazon is trying to compete with other stores on content. The deal includes an exclusivity clause, so if you have a book you want to publish under this deal you cannot sell it elsewhere, eg Apple, Barnes & Noble, Sony etc. You can only sell it through Amazon. This makes a lot of sense for Amazon and I see why they want it.
There is also a more vague clause which prevents you from selling something that 'competes' with the book you put on Amazon. Many people are taking a benign definition of 'competes', ie that it means you can't just change the title of the work and submit it elsewhere. Again, I see their point. But it is always dangerous to assume the benign definition in an enforceable legal document. It might, for example, be interpreted to mean other works of the same genre by the same author. Amazon may have no intention right now of it meaning anything sinister, but over time the people you deal with change around, and you can find someone less benign on the other side of the transaction clutching the document you signed up to. So it is best if the document says exactly what you mean.
My point here is that competing on content serves no one but Amazon. If only Amazon has the book you want to buy then they can set the price and level of service they want. If it is only worth publishing your book on Amazon then the same applies. We all become deal takers.
From everyone else's point of view (both readers and writers) we want the likes of Amazon to compete with each other on price and service, but never on content.
We want them to pay authors the best royalties and charge readers the lowest price.
We want them to combine that with awesome searchability and bang on recommendations (and I have no problem with Amazon in this area).
But we want them all to be able to supply any book ever published, ideally, with those same levels of price and service.
Sometimes Amazon will be a little ahead of the others, sometimes it will be Apple, sometimes it will be B&N etc. And we will reward the ones who keep up in this game, of course. But we will only maintain high standards if there is more than one, and we must be able to make our decisions regardless of content.

Friday, 25 November 2011

Organising Ivy repos

Edit: since I migrated most of my projects to Maven this is less relevant to what I do now.

This is about a trick I use to organise my open source projects.
Unlike in-house projects, open source projects have to allow anyone to download them and build them so they cannot, for example, be dependent on some local library to resolve dependencies.
I use Apache Ivy as my dependency manager and ivyroundup as my public repository. So when my project's ivy configuration needs to resolve, say, slf4j, it can find it and download it automatically. I don't need to keep all the jar files in my project itself, which means I don't need to maintain them etc. So this is good.
If I only had one project we could stop there, but I have several and some are dependent on others. So I might make a change to madura-date-time which is dependent on maduraconfiguration. In the process I might identify a change to maduraconfiguration I need to make. So I do that, publish maduraconfiguration to ivyroundup, then go make my change to madura-date-time and... find I need to make another change to maduraconfiguration. Oops.
Clearly I need a more efficient way to do this and I have one, but before we go there we need to understand that these projects are all in Eclipse, and that means they end up having their dependencies resolved two different ways.
For the official build I use Apache Ant. There is a build.xml file which resolves dependencies, compiles everything, runs the unit tests, packages it into a jar file and even uploads it to Google. It also generates the required configuration files for ivyroundup. You don't need Eclipse to run this, just a Java JDK and Ant.
But for moment by moment development Eclipse runs the compiles and, when necessary, resolves dependencies using the same ivy.xml and ivysettings files that Ant uses. But how it finds the repository is subtly different.
The ivysettings file is an xml file that tells Ivy how to find the repository. This is what mine looks like:
<ivysettings>
    <property name="ivy.cache.dir" value="${user.home}/.ivy2" />
    <property name="roundup-repository" value="http://ivyroundup.googlecode.com/svn/trunk" override="false" />

    <settings  
            defaultResolver="libraries"           
            defaultConflictManager="all" >

    </settings>
    <caches
        checkUpToDate="false"
        defaultCacheDir="${ivy.cache.dir}/cache" />

    <namespaces>
        <namespace name="maven2">
        </namespace>
    </namespaces>

    <resolvers>
        <chain name="libraries">
            <packager name="roundup" buildRoot="${user.home}/.ivy2/packager/build" resourceCache="${user.home}/.ivy2/packager/cache">
                <ivy pattern="${roundup-repository}/repo/modules/[organisation]/[module]/[revision]/ivy.xml"/>
                <artifact pattern="${roundup-repository}/repo/modules/[organisation]/[module]/[revision]/packager.xml"/>
            </packager>
        </chain>
    </resolvers>
</ivysettings>
The resolvers specify where the repository is. You can have more than one but I only need one just now. It uses a property called "roundup-repository" which is defined in the third line and, of course, it points to ivyroundup.
Now comes the tricky part. I want to point to a local repository instead of ivyroundup but I want nothing in the project that I commit (and which others would download) to see that repository. I just know that if it is something I have to edit out before I commit I will forget.
When building under Ant this is easy. I can just define an Ant runtime property in Eclipse that overrides "roundup-repository" so if I launch the build from Eclipse the override value will be used, ie my local repo. Since I define this value as part of the workspace configuration there are no files to accidentally commit because there is nothing in the project. It also means anyone downloading the project doesn't have to set that up.
But when the IvyDE plugin runs it does not see the Ant properties because it isn't Ant (fair enough). There are two ways to override the property. I can define project level properties or workspace level properties. Project level properties will mean I have a file in the project which I have to edit before I commit. So I reject that approach. Workspace level properties are only available if I have a workspace level ivysettings file. But I don't want to do that because that means anyone downloading the project will need to know how to set that up because, of course, it isn't committed.
There is one other way and that is the trick.
I start Eclipse with this in the command line:

-Droundup-repository=file:///home/roger/madura3/ivyroundup-trunk

which points to where my local repo lives. Note that this has to be on the command line after the -vmargs option.
With that in place IvyDE will use my local repo instead of the remote one.
And there is nothing in my project that specifies this so nothing to remember to change when I commit.
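The mechanism in miniature is just a JVM system property with a committed default. This little Java sketch only illustrates the idea; the class is not part of any of my projects, only the property name and default value are real:

public class RepoLocation {
    public static void main(String[] args) {
        // -Droundup-repository=... on the command line wins;
        // otherwise we fall back to the committed default.
        String repo = System.getProperty("roundup-repository",
                "http://ivyroundup.googlecode.com/svn/trunk");
        System.out.println("resolving against " + repo);
    }
}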
What about Ant? Is this enough to tell Ant where to find the repo as well?
Sadly no. I still need an Ant property defining the value for roundup-repository or it breaks. But actually I need a few other Ant properties anyway and I have them in an external file.
Why do I add the property to the command line and not to the eclipse.ini file? Well, that is because I run a number of workspaces, all of which launch the same Eclipse and hence use the same ini file. But only one of my workspaces needs roundup-repository defined in this way. Others need other stuff. So the setting does not belong in the ini file.
So, to recap, I have a workspace that uses a local Ivy repo without any of the projects in the workspace knowing its location. The only thing they know about is the remote repo, which is all I want people who download them to know about because that is the repo they can access.

Thursday, 10 November 2011

Fricken' Laser!

I decided to make a ray gun. But it had to actually be a ray gun, not something that looks like a ray gun but doesn't do anything. Given that the only technology we have today that comes close to science fiction ray guns is the laser, it had to be a laser. I started with this image:
That's a Derringer pistol, a tiny gun produced in the 19th century and favoured by ladies and gamblers for its small size, making it easy to hide. It fired just two shots and I gather it was not too reliable, but I guess waving it about in the right circumstances would be enough. This is way more ornate than I can manage and it isn't a laser, of course. So this is what I made:
And yes, it is a real laser. Lasers are remarkably easy to get hold of these days, which should not be a surprise because they are in our mice, in our DVD players and probably a ton of other places. They are quite cheap too. The trigger is a little push-button and the laser itself is the bit poking out the end of the main barrel. It is fairly powerful but I don't think it would pop a balloon. Think laser pointer rather than death ray. Still, it does make it a real ray gun.

Just the laser itself was not enough so I added a little piezo oscillator (the black bit on the top), and inside the handle is a Teensy which drives it to make an interesting noise when the laser lights up. There is a battery in the handle as well. It is about 5 inches long (similar enough to the Derringer) and should fit discreetly into a pocket or a purse. Mrs gets to wield this thing (I have the vortex manipulator).

Programming the Teensy to drive the oscillator was interesting. The oscillator is a very simple thing; it can only play one note at a time, no chords. But I can send any note by frequency and control the timing etc. I can make it play a tune if I want. In this case it makes a buzzy sort of noise, which seems appropriate.

Saturday, 17 September 2011

Vortex Manipulator

Anyone who has watched Torchwood will recognise this image.
That's Captain Jack's Vortex Manipulator. It's a multi-purpose tool, which I guess means the writers can make it up as they go along. One of the things it can do is time travel, with some unspecified limitations. During crossovers with Dr Who the Doctor makes disparaging remarks about it. There are also a couple of times in Torchwood episodes when time travel would have given Jack an easy solution to the problem at hand but he doesn't use it. It does look pretty darn cool even if (appropriately enough) it also looks a bit gay.

For me the coolness overwhelms the (inappropriate in my case) gayness so I decided to make one. Not exactly the same, though. I wanted some differences. I wanted to get rid of the aluminium look for a start because to me it clashes. Something more steam punk like copper would be better. And I wanted it to actually do something. Not necessarily something useful but something.
So here is mine.

Yes, it is a bit bigger, but it actually does something. It detects Daleks. It does some other stuff as well. The black square is a Microtouch, which has a bunch of stuff wired into the smallest touch screen I could find. If you go to the link you'll see that I didn't have to build the electronics myself, though I did have to get the ON button exposed with some simple soldering. You can just see the button above the top of the screen. The Microtouch's own button is on the underside, which is too hard to reach.

Assembling it was quite tricky. The Microtouch is set in a carefully shaped block of an epoxy mixture loaded with ceramic particles, which makes it feel like stone. Then I sprayed the top with copper paint (nicer than aluminium). There are no attachment lugs on the Microtouch and it seems kind of fragile, so it slots into the space in the epoxy snugly enough to hold tight with no screws. The leather on the bottom holds that end down. If I had to I could lever it out again.

After that it was a lot of messing about with leather, something I had not tried before, and Mrs had to pitch in with the sewing after I did the cutting out and glued everything in place. The domes and stuff were hardest to get right because they don't handle lots of layers of leather. It is bleached sheepskin dyed to get this colour (had to figure that out too).

This shows it closed up. The battery (a 1000mAh lipo which is very flat) is tucked into the underside of my wrist.

There's a trick here. If I open the near side it opens up as shown in the first image. But if I open the back domes I can get at the underside of the electronics, enough of them anyway. I've made various incisions into the oval block of epoxy to get at (clockwise from the left) the reset button, the ON button (the wires lead from the PCB to the button embedded in the top), the USB port, the Micro SD card slot, and the power supply.

Remember I said it should actually do something? I can program this thing in C and load the code down through the USB port. Actually I had to trim a lot of the plastic off my Micro USB cable to allow this. It was either that or make the gap bigger in the epoxy, which would have weakened the structure. But back to programming it. The Microtouch has an accelerometer on board so I built a Dalek detector based on that. It does not detect real Daleks (did I need to say that?).
It looks a bit like a radar screen and it goes red with a tiny image of a Dalek when it gets some movement on the x-axis. After a while it goes away again. The positioning of the Dalek and the time it appears are randomised. This stretched some very old muscles for me because back in the '70s I did some assembler programming on micros (they were a lot dumber back then). But since then I have got used to more-or-less infinite amounts of memory and vast off-the-shelf libraries of stuff to do simple things like generate random values. The Microtouch does have a nice library for generating shapes like circles etc which helped with the geometry, but the randomising I had to figure out for myself.

So, target achieved. It looks cool and it does something kind of useful. I may yet think of something actually useful, but this does it for now.

Sunday, 7 August 2011

Looking for the Higgs Boson in BPM

I've been doing stress testing on my Business Process Management (BPM) framework. What this involves is launching hundreds of processes into the system and watching to see that they all complete in the expected way. Because there are hundreds, the watching part has to be done by a testing framework I've built. The test framework is multi-threaded and the BPM framework is multi-threaded, so there are a lot of threads bouncing around in all kinds of hard-to-predict ways. Not actually unpredictable, but too hard to be worth predicting. What I need to know is the outcome rather than exactly what happened.

Until something doesn't have the right outcome, of course. Then I have to trawl through the logs and figure out where something went wrong and why. It can be something simple like the process wasn't defined the way I thought, or it might be complex and the result of some interaction between threads.

It struck me that this is a tiny bit like what those clever people looking for the Higgs Boson have to do. In their case they accelerate protons to enormous energies and slam them together. Then they pore through the results to see what happened. Because each run generates terabytes of data (mine generates less than a meg) they have to use really, really smart tools to sift through the results. When they find something it is usually the result of a second or third level of particle fragmentation, and they have to work backwards to see what was produced in the first place without being able to observe it directly.

So that is a lot harder than what I am doing, obviously. But the errors that crop up in my framework are also sometimes caused by something that happened two or three steps beforehand, and that means I have to work backwards as well.

Fortunately I have far less data to work with and there aren't any undiscovered particles involved that may or may not be there.

Saturday, 30 July 2011

Two projects updated

I have been mostly working on Madura Rules lately but I took a little time out to update Madura Bundles and the schema builder.

madura-bundle-3.0

The change of major version means it is slightly incompatible with the previous version. Specifically you no longer need to run the init method on the bundle manager (if you do call it nothing bad happens, so it is not really incompatible). I also added a timer to the file scanner. This means it is ludicrously simple to add new bundles to a running system. You just copy the jar file into the scanned directory and wait. Good things happen.
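Under the covers this is no more than a timer polling the directory. The sketch below is not MaduraBundle's actual code, just the idea, with a hypothetical loadBundle() standing in for whatever the bundle manager does with a new jar:

import java.io.File;
import java.util.HashSet;
import java.util.Set;
import java.util.Timer;
import java.util.TimerTask;

public class BundleScanner {
    private final Set<String> seen = new HashSet<String>();

    public void start(final File dir, long periodMillis) {
        new Timer(true).schedule(new TimerTask() {
            public void run() {
                File[] files = dir.listFiles();
                if (files == null) return; // directory missing or unreadable
                for (File f : files) {
                    // any jar we haven't seen before is treated as a new bundle
                    if (f.getName().endsWith(".jar") && seen.add(f.getName())) {
                        loadBundle(f); // hypothetical: hand the jar to the bundle manager
                    }
                }
            }
        }, 0, periodMillis);
    }

    void loadBundle(File jar) { /* bundle manager does its thing here */ }
}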

I also simplified the docs a bit because the previous docs were written as I developed the idea and needed pruning down to just telling you what you need to do. I also simplified the examples a bit.

schema-builder-1.2

I added support for composite keys and a colleague verified that (after some changes) it works okay with Postgres as well as Oracle. We still don't handle inheritance and ManyToMany relationships because there is just not enough information in the db definitions to derive them. So you still need to edit the resulting XSD file to finish off the exercise.

Previously the fetches were all set to EAGER but I changed them to LAZY. Of course you can edit this too, or adjust it with JAXB.
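To illustrate what that changes in the generated entities, here is an invented pair of entities; this is not schema-builder output, and Invoice and InvoiceItem are made up:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class Invoice {
    @Id
    private long id;

    // schema-builder 1.2 now generates LAZY here; previously it was EAGER
    @OneToMany(fetch = FetchType.LAZY)
    private List<InvoiceItem> items = new ArrayList<InvoiceItem>();
}

@Entity
class InvoiceItem {
    @Id
    private long id;
}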

Edit: MaduraBundle has been moved to github.

Tuesday, 21 June 2011

Madura Rules Part 5

I've been working on, and posting on, other topics but I did say I would mention I18n issues and how Madura Rules handles them. In fact the heavy lifting is done in Madura Objects, when it isn't done by standard Java features. If you don't know about I18n at all then go here.

The specific problems we need to solve with this are:
  • Labels for fields should be in the selected language
  • Drop down lists should be populated in the correct language
The first thing to notice is that in a multi-user application, like any web app, we have a problem. Web requests arrive with a locale specified, which is great, but we need it 15 call levels into the application, which is not. Either we pass the locale through a lot of methods or we do something smarter. We could use Locale.setDefault() but that is not specific to this request. Other requests would be forced to use our setting, or we would be forced to use theirs. The setting is JVM wide. Not what we need.

We have a similar issue with the .properties file. That is a little different because we can use Spring to inject the file where we want it, and Spring will automatically pick the right version of the file according to our locale. But it is too limiting to have everything in .properties files, and there are lots of dynamically instantiated classes that I need to inject it into, so I can't use Spring there.

We can solve all these issues with a small factory class called MessageTranslator. While I almost always use Spring injection this time I'm using a factory because it gives me a simple way to get a class that is local to the current thread and hence to the local request. I know Spring provides various scopes for beans but this is simpler.

So, MessageTranslator allows me to construct an instance of the class and store it on a ThreadLocal. The class holds the Locale and the MessageSource. The MessageSource is Spring's nice way to handle those .properties files. But we will come back to that.

Early in the processing of the request, ie while I still have easy access to the locale, I create a MessageTranslator for this thread with the locale and the MessageSource. Thereafter I can get a locale based message like this:
MessageTranslator.getMessageTranslator().getMessage(String code)
The getMessage() method is overloaded to allow me to pass arguments and a default message if I need them. But the point is I do not need to pass the locale or even the MessageSource. Nor do I need to inject the MessageSource to use it.
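A minimal sketch of the idea, with illustrative method names and signatures rather than the real API, looks like this:

import java.util.Locale;
import org.springframework.context.MessageSource;

public final class MessageTranslator {

    private static final ThreadLocal<MessageTranslator> CURRENT =
            new ThreadLocal<MessageTranslator>();

    private final Locale locale;
    private final MessageSource messageSource;

    private MessageTranslator(Locale locale, MessageSource messageSource) {
        this.locale = locale;
        this.messageSource = messageSource;
    }

    // call this early in the request, while the locale is still easy to reach
    public static void bind(Locale locale, MessageSource messageSource) {
        CURRENT.set(new MessageTranslator(locale, messageSource));
    }

    public static MessageTranslator getMessageTranslator() {
        return CURRENT.get();
    }

    // deep in the call stack: no locale or MessageSource needs to be passed around
    public String getMessage(String code) {
        return messageSource.getMessage(code, null, code, locale);
    }
}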

The code is simple enough and you can find it in Madura Objects at
nz.co.senanque.localemanagement.MessageTranslator

So what do I use this for exactly?

First, whenever Madura Objects fetches a field label it calls the MessageTranslator, assuming the name being used is actually a key into the resource. So if you have a field labelled 'amount' then it will look up your properties file for that key and return what it finds. For the English properties file you would have amount=amount but for the French file you would perhaps use amount=quantité. Also, if you failed to put a label on a field then the field name will be used.

To make this work you create a bean in the Spring file like this:
<bean id="messageSource"
 class="org.springframework.context.support.ResourceBundleMessageSource">
 <property name="basenames">
 <list>
  <value>Messages</value>
 </list>
 </property>
</bean>
and then inject that somewhere convenient for you to create the MessageTranslator for this request. This variant shows that you can have a list of properties files, though we only used one here. This is Messages.properties. The French variant would be Messages_fr_FR.properties but you don't need to specify it in your Spring file, it will be found automatically.

So far so good. We can deliver the right labels for the language. But what about the values in the drop down?

Here it gets a little tricky because, while we could just create a properties file for everything, we may be loading values from a database or other external source. Those values might change and we won't want to rebuild our application with a new properties file every time they do. We also want to be able to maintain the alternate language data in a similar way to the primary data. That means if the primary data is stored in a database we want the translated names stored there too, not in a properties file we forget to update.

Fortunately Spring helps us out here. We can just adjust our messageSource bean so that it looks like this:
<bean id="messageSource" class="nz.co.senanque.i18n.XMLMessageSource">
 <property name="resource" value="classpath:/Messages.xml"/>
 <property name="parentMessageSource">
 <bean class="org.springframework.context.support.ResourceBundleMessageSource">
  <property name="basenames">
  <list>
   <value>Messages</value>
  </list>
  </property>
 </bean>
 </property>
</bean>
I'm using a custom class (XMLMessageSource) that extends org.springframework.context.support.AbstractMessageSource. AbstractMessageSource allows you to inject a parent source and in this case I am using my Messages.properties file. If XMLMessageSource fails to find an entry it will delegate to the parent and look there.
My XMLMessageSource is fairly simple: it looks up an XML document for the right entry and it handles multiple locales which may be specified in the document. I use this for testing and illustration rather than production. You can replace this with a class that looks up a database or whatever. The net effect is that any lookups go to the first class and then to the second (and to more places if you chain further sources).

When an application is building a drop down list it calls nz.co.senanque.validationengine.FieldMetadata.getChoiceList(), which gives a list of ChoiceBase objects. Each of these has a key and a description. The key is the real value, the value we want to store in a database etc and the value we would compare with in a rule. The description is the display value. If we call ChoiceBase.toString() we get the translated description. So the description becomes the lookup code, and you can store the codes and the translations for each language in the same place.
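Consuming the list might look like this sketch. The getKey() accessor and the dropDown object are assumptions for illustration, not confirmed Madura or UI API:

// fieldMetadata is the nz.co.senanque.validationengine.FieldMetadata for the field
for (ChoiceBase choice : fieldMetadata.getChoiceList()) {
    String key = choice.getKey();     // assumed accessor: the real value, stored and compared in rules
    String label = choice.toString(); // the translated description for display
    dropDown.addOption(key, label);   // hypothetical UI call
}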

It depends on your UI architecture how well this actually works, but it works very nicely with Vaadin. Vaadin supports POJOs as updateable UI objects and Madura Objects are exactly that, though they have extra metadata as well which we want to expose to the UI. But more about UI next time.

There is one other I18n issue to cover and that is the messages generated from rules when there is an error. Recall that we can have a rule that looks like this:
constraint: Customer "check the count: {0}" [invoiceCount]
{
    invoiceCount < 10L;
}

This says we only allow up to 9 invoices on a customer (just an example, not something you would do in the real world). If this constraint is violated the message 'check the count...' will be generated and the argument(s) listed in brackets will be applied. Now, what if you want this message in French?

When the rules are generated a properties file called messages.properties is generated along with the Java, and all of these messages are placed in it with appropriate labels. The MessageTranslator is used to resolve the message code, so this messages.properties file must be included in your messageSource bean somewhere for it to work.

You can then supply alternate messages.properties files of your own, eg one in French. As long as the MessageTranslator is initialised with the right locale the right things will happen.

There is another properties file that is used by the validation engine in Madura Objects. This is ValidationMessages.properties and it holds all of the error messages generated by the validation. As usual it must be part of your messageSource bean. You can supply your own translations of these as required. So your messageSource bean will look something like this:
<bean id="messageSource" class="nz.co.senanque.i18n.XMLMessageSource">
 <property name="resource" value="classpath:/Messages.xml"/>
 <property name="parentMessageSource">
 <bean class="org.springframework.context.support.ResourceBundleMessageSource">
  <property name="basenames">
  <list>
   <value>Messages</value>
   <value>ValidationMessages</value>
   <value>com.mydomain.rules.messages</value>
  </list>
  </property>
 </bean>
 </property>
</bean>

Next time I will post about using all this in a Vaadin application. There is, it turns out, very little to do.

Saturday, 11 June 2011

Ubuntu 11.04

I've just upgraded from Ubuntu 10.04 to 11.04. It went well. Here are my upgrade notes:

The machine is a (now aging) Dell Inspiron 9400. I've had this machine a while. When I was thinking of upgrading it a while ago I knew the replacement would come with Vista and I'd heard nothing good about that, so I switched to Linux, Ubuntu 8.04 to be precise. It ran 7x faster, really.

Just a little of that might have been because I used a new disk drive that could be faster. A decent chunk was that there is no need to virus check every file I save to disk. And everything pretty much worked. Some small tussles with the second monitor I use, but nothing major. Harder things like Bluetooth connections work out of the box with no messing about.

So this is the second upgrade I've done to a Linux box, and the third Linux system I've had running on it. I need to note that the way I do it is to start with a clean disk and do a complete install, not an in-place upgrade. I use the upgrade exercise as an excuse to clear out the rubbish that builds up. You can do an in-place upgrade if you want.

There is just one thing I'm not sure I like. They've revised the Gnome UI and made it quite a bit different. I can probably get used to it, but I can also set the user option to Classic (it's on the login screen) and it looks like the old one. This might not be available in future versions though, so I'll probably have to get used to the new one eventually.

The biggest thing I have to do, then, is reinstall the old software. Most of this comes from the distros making it trivial, but I do have to walk my old data over too.

Oracle: Oracle has a deb file that I can download and install. But after the install I'm not done. I need to run this: sudo /etc/init.d/oracle-xe configure
That asks me for a port number; I gave it something other than the default 8080, which I use for other stuff. Once that is done it runs fine.

FireFox: I need my bookmarks. I can export them to an HTML file on the old system and import into the new.

Thunderbird: Just copy the files from ~/.thunderbird in the old system to the new (actually that applies to a few other things, but I'm selective; these things are sometimes used as caches and I'm trying to clean up here).

Eclipse: Just copied the whole directory over to the same place on the new drive. I keep mine in /opt/eclipse36 (ie 3.6, ie Helios). BTW do not do this from a Windows system. There are executable files in the Eclipse directory tree and a copy via Windows loses the execute bits, so a lot of stuff will work but some of it won't.

Java: I need to have both Sun Java 6 and Sun Java 5. Java 6 comes out of the distros just fine. For 5 I just copied the old directory over (to /usr/lib/jvm, ie next to java6).

I had a little trouble getting my local svn permissions right but that is all described here.

I notice there's one improvement over 10.04. Previously if I shut the lid it either wouldn't suspend or it would suspend but not want to come back up. Under 11.04 the suspend seems to work.
I haven't missed suspend much because the system boots so fast shutting down and restarting is no bother. But it's nice to see it there.

The one thing left that didn't quite work in 10.04 is scanning with OCR. There are OCR programs for Linux but after I read the docs I decided they weren't ready, too hard to use. So I went to Windows for OCR. It doesn't come up much, and someone will tidy the OCR up eventually. Possibly they already have, ie I haven't tried that out recently. It won't be o/s sensitive, though, so I don't expect 11.04 to make any difference there.

There is one other thing. IE doesn't work on Linux, of course. I have three internal work sites that don't like FF. I've not had problems on public sites though, so you probably don't care. My workaround is to use Chrome, which works okay. The screens look a bit funny but they do work.

But overall, this is working very well indeed. Installs in Linux have got almost boring these days, but since most people still run Windows (for some reason) I get geek points when my colleagues notice my Linux system.

Tuesday, 7 June 2011

Is this Science Fiction?

Does anyone else feel like they are living in a science fiction movie?
No, not the Matrix, that one (well the way it starts, anyway) is too easy.

I grew up on a farm near a small town. The most high tech thing we had was probably the automatic washing machine, and that broke down a lot. My father got a second hand one so he had replacement parts for the first and he seemed to tinker with it often. I used to read stories about spaceships and watch shows like Star Trek.

Today I was at Bangkok airport. It is huge and new and feels very 21st century. Security guards ride about on segways, everyone (including me) has a headset of some kind. I got on a plane, which isn't a spaceship, though from the inside the difference is a bit subtle. It flies and it gets high enough for the atmosphere to be unbreathable. It goes faster than I can comprehend and, oh yes, I can watch TV on board! Not just what someone is broadcasting, I can pick what I want to see when I see it.

When I was a kid I thought TV was pretty flash, but I could only watch what someone else broadcast and there was only one channel. That was the choice: watch or not. Actually, if I want I can watch videos loaded on my phone (as long as we aren't taking off or landing).

Now I'm at Singapore airport, another huge place and so big they have trains on overhead rails to carry me between terminals. That is so like space ports I used to read about long ago.

We don't have flying cars or a moon base yet, but it feels like we're most of the way there.

Saturday, 21 May 2011

I missed the Rapture. Again.

I just have to add my 2c worth on the Harold Camping Rapture prediction since nothing much happened today, well certainly not anything like the 'return of Christ'. Some of us think He is around all the time anyway so that doesn't make a lot of sense really.

Here's a little thought experiment.

The Rapture happened in 1998. Only about 20 people went and they're on various missing persons lists around the world. Most of them were Buddhists, so it came as a nice surprise for them. For the rest of us, well, we're still here and we have to put up with what we have. Fortunately that's not too bad. How would you disprove this?

But, even stranger, the earth was created in 1922, 28 September, 18:02:23. I can do numerology too, you know. It appeared with the rest of the universe fully formed. The people who were instantly created then were loaded with 'memories' that convinced them they had been around for a while. This is similar to the dinosaur bones being buried to make the earth look far older.

Yes, this is complete nonsense, and impossible to disprove as well.

Today's prediction was accompanied by people offering to look after your pets if you were taken. The carers are certified atheists and regularly blaspheme to ensure they'll stay behind. But how do they know your dog won't get Raptured as well? There's no evidence either way on that, I think. And I wouldn't be so certain the atheists (like the Buddhists) won't get a surprise on the day. A lot of people seem to miss the 'saved by Grace' bit in the Bible. It's actually the most important bit. But it is not a way to get anyone to do what you want, so it is often overlooked.

Still, we're still here, so far anyway. Mrs pointed out that the prediction probably didn't take into account the lost 11 days in the calendar. So maybe we should wait 11 more days. Or not.

I'm frantically working towards a trip to Thailand next week. Mrs also pointed out that maybe we should check they are still there before heading out. Maybe it really was Buddhists who get Raptured.

Wednesday, 18 May 2011

Mac Defender is malware but not a virus

You're walking along a dark alley.
"Psst, wanna try some of this?" The stranger is holding a hypodermic needle and you can see some green fluid in the vial. "It's real good."
"Oh, sure, why not." You hold out your arm.
Really???

With just a little more social engineering this is what people are doing to get the latest Mac 'virus'. Of course it is called a computer virus but it isn't one. If you got ill after accepting the shot of green fluid you couldn't say you caught something 'accidentally', which is how you'd catch a cold. You did something really dumb and you suffer the consequences.

Strictly speaking the 'Mac Defender' (it goes by other names as well) is a Trojan that relies on social engineering. This is a fancy way of saying that the bad guys convince you to install it by pretending it is something else and then you are screwed. Trojan is from Trojan horse, which was a horse statue the Greeks gave to the Trojans, but they filled it with soldiers first. When the Trojans took it inside their gates the soldiers jumped out of the statue and attacked. The rest is history.

Mac Defender pretends it is from Apple, which is certainly not Apple's fault. I'm not a fan of Apple but they are squeaky clean here. People download this thing, install it, give it their root password and then find it insists on showing porn images at random moments (inevitably the worst moments, of course) and claiming there is a virus on the system. The authors then ask for money to remove it. There's a suggestion that if you actually give them a credit card they always say it didn't work and ask for another, keeping the details all the same.

But this is very, very different from the other ways you can get malware.
  1. Worm. This is when something out on the internet finds an open port on your machine and slips in. You didn't do anything, other than leave a port open; it just crept in when you weren't looking.
  2. Dumb Trojan. When you think you're just opening an email attachment or browsing to a URL and behind the scenes evil stuff happens.
In both of those cases you could reasonably assume the computer would protect itself. In the Mac Defender case you actively overrode all possibility of the machine protecting itself, which is quite different.

Operating systems like Mac OS X and Linux are based on Unix, which has some inbuilt protections that make it very, very hard for malware to break in as Worms or Dumb Trojans. We have to accept that the odd security bug in the operating system will arise (and will be quickly fixed) but it is generally true that Unix based operating systems do not see this kind of malware.

That is not the case with Windows, which lacks three advantages Unix has.
  1. The execute bit. To be executable, a program file must have the execute bit set. This is not set by default on, say, attachment files that you save. This means that malware has to figure out a way to get you to set the bit, usually manually, so you would know about it.
  2. Root access. Unix has a strong separation between the privileges of the admin or root user and the rest. People don't normally run as root unless they really have to because, say, they are installing software. So just running some program either from a network port or from your desktop is limited in the amount of damage it can do. Malware writers find these limitations boring. They want to trick you into giving them root access. Again, that's going to be a manual thing you know about.
  3. The distros. Unix software typically comes from packages distributed by distros rather than downloaded from random sites. It is unlikely that malware gets into these distros, but if it did it would be cleaned out very quickly. For Windows users: the distros work a lot like Windows Updates but they update everything and install new software. Unix can do this because the software is generally free so it doesn't have to figure out how to charge you. I'm not sure what Mac's distro arrangements are.
But the only way to guard against malware like Mac Defender is to deny users root access to their own machines. This is why a lot of cell phones don't come 'rooted' by default, they don't allow you to be the root user. We already see this trend moving into tablets and maybe it will be found in laptops and desktops soon too.

Sunday, 1 May 2011

Schema Builder

Say you have an existing database. You access it with JDBC and you maintain it with SQL scripts.

Life is hard because you have to coordinate changes to the SQL with changes to the Java. This is why Hibernate and similar tools were invented. There are tools which will look at an existing database and build the right entity classes so you can use JPA.

My experience with those tools has not been very good. But once you have the entity classes you can say those classes are the 'master' definitions of the database objects. You don't maintain SQL anymore, you generate it from those entity classes. You get all kinds of advantages with this which the Hibernate people can tell you about.

But you can go a step further. Say you decide that you would really like all your entity classes to have the @Cache annotation added. Say you want them all to have a specific toString() method. If you are using the entity classes as master you have a lot of editing to do.

Using an XSD file along with HyperJAXB3 to generate your entity classes means the XSD is now the master. For the @Cache and toString() cases you can just change the generation options and rerun the generation. This is very easy. You have a lot of other options around injecting standard code into your entities that you did not have before. Some of them (like toString()) may require you to write a JAXB plugin but most, like the @Cache, do not. This is because there are a lot of useful plugins already out there such as Annox and Madura Objects.

But you still have a problem. How do you get from your SQL database to your XSD file without writing it all by hand?

That's where you use schema-builder. It scans your database using JDBC and generates the XSD file you want. It figures out the OneToMany and ManyToOne relationships from the foreign keys you've set up already.
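The relationship detection rests on standard JDBC metadata. This is not schema-builder's actual code, just a sketch of the kind of calls it depends on; the connection arguments and the INVOICE table are made up:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ForeignKeyDump {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
        DatabaseMetaData meta = con.getMetaData();
        // every foreign key pointing at INVOICE, ie each child side of a OneToMany
        ResultSet rs = meta.getExportedKeys(null, null, "INVOICE");
        while (rs.next()) {
            // child column -> parent column: the raw material for a ManyToOne
            System.out.println(rs.getString("FKTABLE_NAME") + "."
                    + rs.getString("FKCOLUMN_NAME") + " -> "
                    + rs.getString("PKTABLE_NAME") + "."
                    + rs.getString("PKCOLUMN_NAME"));
        }
        con.close();
    }
}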
There are some restrictions:
  • Not all JDBC data types are supported, though the most common ones are.
  • ManyToMany relationships are not handled.
  • We don't handle multi-field keys.
  • Inheritance relationships are not detected.
To get around the last three just generate the XSD and edit it. You should regard the generated XSD as almost correct, needing tidying around those three points. Adding more data types needs code changes, not hard, but there are a lot of obscure data types and I just added the common ones.

Monday, 25 April 2011

The coming icon crisis

I'm looking at my phone. It is a Samsung Galaxy S so it has a pretty display with a lot of icons on it. For example the one to take pictures is a camera, the icon to play music is a disk, looks like a CD, with a musical note in front of it. The icon to show video has a film reel. The icon to get at the ebooks is a hardback book. Messages is an open envelope.

Notice anything? Well, in a few years all of those icons will be meaningless. Even now I'm not sure how many people remember the film reels we used to use for movies. CDs are fast being replaced by downloads, and books (ie dead-tree books) will be replaced by ebooks. The camera is, of course, now something that looks more like a phone. How long before the idea of a camera that isn't inside a phone vanishes? Sure, the pros will probably still use specialised devices, but the rest of us will rarely see those.

All of the icons will then be just... a phone.

Though the cultural images do last longer than the things themselves. The near-universal signs for toilets are the silhouetted figures of a man and a woman... well, a human wearing a skirt and a human NOT wearing a skirt, to be precise. But there are not many places now where women wear knee-length skirts like the one depicted. They wear trousers, shorts, short skirts, long skirts, loose skirts, tight skirts. They almost never look like the silhouette.

So in 20 years or so will the kids look at the movie reel icon and think 'video'?
Maybe they will. It does also have the triangular arrow pointing to the right that video clips have. Maybe they've future-proofed that one.

The Twitter and K9 (email) icons are based on the brand name, not so much illustrating the function as who provided it. That works, and maybe they will morph into more generic icons so that we'll think K9 even when we're using a different email client (assuming K9 becomes that widely popular). Biro did it with pens, so it isn't out of the question.

Saturday, 26 March 2011

A decent laptop

My old laptop, a Dell Inspiron 9400 I got about 4 or 5 years back, has lasted well. I thought about upgrading a while ago when it was starting to run too slowly. At that time Vista was the operating system that came with all the new machines. Everyone (except one person) I spoke to about Vista had nothing nice to say about it.

So I tried Ubuntu. I've noted here before that, on the same laptop, Ubuntu ran seven times faster than Windows XP. So who needed a hardware upgrade after that?

The machine is still fast enough. But one of the hinges on the lid is dodgy. It won't be long before that comes apart and then I'll have a problem. But there are no other issues. Everything works as it always did (well, it runs faster than when I first got it because then I ran XP). Still, a broken hinge needs sorting and for an ageing laptop the best answer is to upgrade.

That's where I hit a problem. The laptop has a truly marvelous display. 17" and resolution is 1920 x 1200. It is a bit reflective so I need to avoid back light, but it is also really bright. The high resolution means I can fit more stuff on the screen. I usually run a second monitor as well. Lots of screen real estate is important to me.

So I'm looking for a 1920 x 1200 display. I'm not so worried about the processor but it might as well be pretty good since I expect to keep the new machine a long time. There's my problem, though. Dell don't do any 1920 x 1200 displays now. I have two choices: the MacBook Pro and one of the HP EliteBooks.

I'm surprised I'm considering a Mac, but it seems I can run Ubuntu on it, so that makes it just as much a possibility as the Windows machines. Whatever I buy I'll have to pay for an OS I won't use. So, okay, I'll take a look. The main downside of the Mac, according to the reviews, is the price. But the local price for the MacBook Pro I'd buy is $NZ4,111, and it has a faster processor than the HP EliteBook 8740w WZ085P, which is $NZ7,260.

So the Mac is an obvious choice on price here. It is a nice looking machine. It weighs less than the HP and the block aluminium construction appeals. So does the subversion of running Ubuntu on it. People will peer over my shoulder and say 'you got a Mac? Gone to the dark side?' 'No, I'm running Ubuntu.' 'Didn't know you could' 'Yep, take a look...' This kind of conversation appeals to my vanity.

But I looked a little closer. To run a second monitor it seems I can't just plug in a VGA cable. I have to get some optional extra to convert the display port on the Mac to VGA or HDMI or DVI (each of my 3 monitors supports at least 2 of those; the Mac supports none of them directly). The Mac has a fancy battery which cannot be removed. When I take a long haul flight one of the things I do is pop the battery and plug in to the airline seat power supply, which doesn't work if the battery is in the machine. Again the problem is solved by adding another adapter to the mix. So that's two adapters I have to carry now.

Then there's the hard disk. What I do fairly often is pop the hard disk and slip in another one. I have various reasons for this, but one of them is that when setting up Ubuntu I would take out the supplied disk and put in an Ubuntu one. If I had any warranty issues I'd replace the factory hard disk and call support. That way they could use their diagnostics with no surprises. But it seems that popping the hard drive on a MacBook Pro involves taking the machine completely apart and voids the warranty.

So, although there are lots of good things about the Mac, it doesn't deliver what I need. I'm also not convinced I should have to spend so much to get what I want from HP, so I'll get the current laptop repaired and keep it a bit longer, until some decent machines come out (ie good display, fast processor, no gotchas). Neither of the machines I've looked at hits all those buttons.

Thursday, 3 February 2011

Weblogic and JNDI

Unlike JBoss and Glassfish there is no way to define a custom JNDI object (eg a URL) in the WebLogic console. We need this to define the location of various configuration files (see maduraconfiguration), and we locate these files using a URL defined in JNDI.

So I cooked up something over the last few days that solves this and put it in weblogic-jndi-startup on Google Code.

It makes use of startup classes, a feature of WebLogic I've not seen on the other servers (though I haven't needed to look), and you can configure arguments to be passed into startup classes from the console.

I've written just one simple class which I can deploy and configure on WebLogic (I'm using version 10.1), specifying the JNDI name, value and type. So now I can define the JNDI name and value in the console.
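The core of it fits in a few lines. This is a hedged sketch rather than the project's actual code: as I understand it, WebLogic invokes the startup class's main method with the arguments configured in the console, and the argument handling and names here are made up:

import java.net.URL;
import javax.naming.InitialContext;

public class JndiUrlStartup {
    public static void main(String[] args) throws Exception {
        String jndiName = args[0]; // eg "config/maduraURL"
        String value = args[1];    // eg "file:///opt/config/app.xml"
        // bind the URL into the server's JNDI tree so applications can look it up
        new InitialContext().bind(jndiName, new URL(value));
    }
}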

People doing the deployment like this kind of thing because it makes their lives easier. I think WebLogic really wants me to write a 'deployment plan', which is a way to edit the values into the web.xml, but that is, frankly, too much to expect from people responsible for deploying our applications into multiple environments. The syntax is too hard, and every time there is a mistake there is a support call with the kind of hard-to-diagnose problem we all want to avoid. Plus we need a solution that is portable across multiple flavours of J2EE.

Share and enjoy!

Sunday, 30 January 2011

VirtualBox and USB scanner

I wasted some time this weekend trying to get my scanner working on Ubuntu+VirtualBox again. 'Again' because it was working okay back on Hardy, but when I tried it recently it was not working. This is probably because I upgraded to Lucid a few months back and haven't used the scanner since, well, not on VirtualBox anyway. The reason for that is that there are scanner drivers that come with Lucid that work on my Canon scanner. So no need to use the Windows ones. Yay!

Not so fast.

I have a need to do OCR scans. Not many but I tried various tools available on Ubuntu and while I could make them work, the error rate was way too high. I've seen much better on the tools that come with my scanners. But they run on Windows. Sigh.

Anyway, there are tricks to this. You have to make sure you get the right version of VirtualBox. There's the OSE (Open Source Edition) which doesn't have USB support, and my scanner is a USB one. That's fine, the non-OSE version (licensed under something called the PUEL) supports USB, and is free for personal use. But you have to download it from the site, not from the usual repositories. Okay.

A little more complexity comes in when you get the latest version (4.x) instead of 3.x because the rules changed. There is only one version now, plus extensions you install on top of it, and those handle USB. Actually I gave up on that and went back to 3.2.

Then you need to make sure you have a user group called vboxusers and that your username belongs to it.

With that in place it still did not work for me, although Windows could see the USB devices (well, the devices showed up in VirtualBox and they had a tick beside them). But no joy. What fixed it was this post. It worked fine after I unchecked the USB 2.0 box in the VirtualBox settings.

Now I can scan from Windows, and more importantly I can OCR.

Monday, 3 January 2011

New Smartphone

I got a Samsung Galaxy S just before Christmas and I'm still impressed. My previous phone was an SE G502, not 'smart', which is my immediate comparison. Things I like about the Galaxy:
  • Voice dialing. I'd heard that this was a bit awkward on Android 2.1 when using a bluetooth headset, mainly that you had to pull out the phone and press a button (issue-1412). On Android 2.2 (Froyo) I only need to press a button on my headset to initiate a call, which is what I want. For the G502 I had to 'record' my voice saying the name for each target I wanted to call by voice. Not bad, but the Galaxy just understands who I mean and finds the name in the contacts. Nice. It seems to be accurate enough as well.
  • Bluetooth generally: Pairing with my headset (Sony DR-BT21G) was no sweat. The buttons all work for volume and changing tracks etc.
  • Music management: I walk a lot and I listen to music when I walk. On the G502 I never got around to organising music into playlists. But the Galaxy interface (which is just the standard Android player) is easy to use, so I did.
  • GPS: I heard there were problems with the Galaxy S relating to GPS prior to the Android 2.2 upgrade. I put off buying until that upgrade was available. It also let me see how committed Samsung were to future upgrades. Anyway, I've had no issues with the GPS. I tried out the navigation operation on both a walk and a drive, and it worked just fine. I noticed it adjusts the 'turn left in 300 metres' up to 'turn left in 600 metres' according to the car's speed. Nice.
  • Tasks/Email. Well, it is Android. So it integrates with everything Google seamlessly. Not quite, actually. The tasks don't sync along with the email (odd, eh?) but if you install GTasks (free, ad-supported) from the store you've got them. I use tasks all the time now. It is simple to note down a 'must do' and assign it a date, then forget about it until it comes up. Email does the obvious and does sync perfectly well. I can fine tune the tasks and events in Google Calendar using my laptop when I want.
  • Aldiko eReader. I'm working through some books I never got around to reading. I thought the screen would be too small but it is just fine. The display is very bright, which helps, I guess. I changed it to white writing on black rather than black on white. That way I can read when the light is off and not wake my wife.
  • Video player: I like to pedal my exercycle to keep fit. But I get bored. Now I can watch videos on the phone. I do this very early in the morning so I need the bluetooth headset on to avoid noise. Anyway, this encourages me to pedal longer and I now have time to watch things about JBoss ESB etc that I never get around to during the day. I did try this using a monitor off my laptop a while ago but I couldn't reach the controls so I stopped.
  • Texting: This is the one gripe I have, and I suspect if I look hard enough someone has it solved. I usually send my wife a text when I leave work. It is always the same text, but there doesn't seem to be a 'send again' function. The G502 had a template I could re-use. Of course retyping the text is far, far easier on the Galaxy, but a resend would make it even easier. I'll keep looking, and I may eventually develop one if I don't find it out there.
  • Browsing: It is so handy to be able to browse anywhere. I normally hook into my home wireless network rather than clock up 3G traffic, but I met some friends in a cafe last week and needed to show them where I live. Google Maps pulled up a map (actually using the Maps app rather than the browser) and it was all so simple. If they'd had one of these I'd just send them the location and they could navigate from that.
  • Weather info: there are various apps that show the current and predicted weather on the main screen. It is accurate enough. A lot of my day to day activities are weather dependent and I find myself flicking my tasks (in GTask) around based on the weather predictions. I know I can look this up on the web (or wait for it to play on the radio) but it is better to have it right there in my face.
Edit: Now I have found out how to copy/paste, the texting issue is gone. The 'long press' is just a new part of the UI to learn.