Saturday, December 08, 2007

I Love Libraries

I think one of my favorite things about going to work for IBM is my access to the library we have here in Austin. I love to read (online and offline) about technology, and I'm always trying to hone my skills and pick up new tricks. So far, I've read Practices of an Agile Developer, which is great by the way, and I'm reading Practical Development Environments right now. Before I started working, I had to scrounge whatever books I could find, but now I have much more access and I can't wait to read more. I already have about eight other books I plan on reading, as well as another seven or so useful references such as the awesome Bash Cookbook from O'Reilly, which has been incredibly helpful in improving my Bash skills. Once I get further along in Practical Development Environments, I'll put up my thoughts on the book. So far it has been a very interesting and fun read.

Friday, December 07, 2007

Fun with javap

I finally got to try out javap for the first time today. I needed to know what version of Java a class file was compiled for, and javap came to the rescue! To determine the version, simply run javap on the class:

javap -verbose my/compiled/class

This will dump a bunch of information about your class file. The only thing you are looking for is the major version near the top. It should look something like this:
$ javap -verbose aTest/service/EJBHelloService
Compiled from ""
public class aTest.service.EJBHelloService extends
SourceFile: ""
RuntimeVisibleAnnotations: length = 0x15
00 01 00 24 00 03 00 1C 73 00 25 00 26 73 00 27
00 28 73 00 29
minor version: 0
major version: 50
Constant pool:
const #1 = Method #16.#42; // javax/xml/ws/Service."":(Ljava/net/URL;Ljavax/xml/namespace/QName;)V
const #2 = Field #15.#43; // aTest/service/EJBHelloService.EJBHELLOSERVICE_WSDL_LOCATION:Ljava/net/URL;
const #3 = class #44; // javax/xml/namespace/QName
const #4 = String #39; // http://service.aTest/
The class file major versions map to the Java releases like this:

Java 6      50
Java 5      49
Java 1.4.2  48
Java 1.3.1  47
etc ...
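Incidentally, if javap isn't handy, the major version can be read straight out of the class file header: bytes 6-7 (big-endian) hold it, right after the 0xCAFEBABE magic number and the two minor-version bytes. A quick sketch using od - the Hello.class here is fabricated on the spot purely so there is something to read; point od at a real class file in practice.

```shell
# Fake the first 8 bytes of a class file: magic CAFEBABE, minor 0, major 50.
# (In octal escapes: ca fe ba be 00 00 00 32.)
printf '\312\376\272\276\000\000\000\062' > Hello.class
# Bytes 6-7, big-endian, are the major version: 50 means Java 6.
od -An -t d1 -j 6 -N 2 Hello.class | awk '{ print $1 * 256 + $2 }'
```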

Thursday, November 08, 2007

Creative Commons: A New Perspective

Presentation Zen pointed me to a very interesting presentation from Larry Lessig at TED today that made me look at the impact Creative Commons has on copyright and sharing. If you haven't watched the video, you should. You won't regret the 18-minute presentation Lessig puts on; it really opens up your view of copyright and its effects on creativity. That, and the presentation itself is very informative and well put together.

Copyright law has been a hotly debated topic for a very long time now, and it's apparent that these debates have formed two camps at extreme opposites. On one end you have those who believe everything should be copyrighted, protected, and controlled - that 'it's the law.' The other end of the spectrum comes in the form of copyright abolitionists. While I wouldn't say I am a copyright abolitionist, I definitely lean more towards major copyright reform than towards strict copyright protection.

The main point I got out of Professor Lessig's presentation really comes down to the fact that copyright law is outdated. It puts too many restrictions on using content creatively. Because of these frivolous restrictions, generations of children are growing up being labeled as pirates for creating new content from other content. Professor Lessig puts this growing resentment best, saying:

A generation that rejects the very notion of what copyright is supposed to do. Rejects copyright and believes that the law is nothing more than an ass to be ignored and to be fought at every opportunity possible.

I agree with that feeling of resentment 100%. I have felt it myself, and it springs from reactions on both sides of the extremes. The extreme copyright camp deploys technologies that automatically remove content containing copyrighted material, whether or not that content falls under fair use. The law has also remained inflexible towards this new concept of reusing and remixing a work into something new, so it remains illegal to use copyrighted work in such ways. While this does not stop people from reusing copyrighted material, it leaves them resenting the fact that they are labeled pirates or thieves, and living against the law has a strong impact on them. Because so many legitimate uses are labeled as piracy, creators come to resent all copyright law and fight it at every opportunity.

This really is a tough situation for artists and businesses alike. While many artists would love to see copyright abolished entirely, businesses just aren't ready to handle that proposition yet, if ever. This is where Creative Commons steps in. Creative Commons acts as a sort of intermediate license that lets artists skip over all the copyright muck we're in right now and use CC-licensed material for remixes freely. Creative Commons licenses have options that allow you to license your work in a way that suits you best. At the moment, Creative Commons is one of the best licensing options for creative works because it gives you the flexibility to let people create more interesting work from what you have, without the encumbrances of the older, more restrictive licensing schemes. The other great thing about a Creative Commons license is that it is non-exclusive, so you can publish your work under a Creative Commons license for non-commercial use and then license that same work to a business for a fee.

Overall, Professor Lessig's talk at TED really gave me better insight into why the Creative Commons licenses were created. They allow the people who were constantly living against copyright law to finally do what they've been doing, legally. This also allows businesses to continue using copyright without disturbing those doing remixes. Hopefully the Creative Commons licenses will begin to push towards more general copyright reform, allowing remixes and reinterpretations of works that aren't licensed under Creative Commons.

Wednesday, November 07, 2007

Finally, Gmail 1.5

I finally got the upgrade to Gmail's new interface and I'm quite pleased with what they've done. I'm not sure I would call it Gmail 2.0, since the contacts section is what got most of the attention, but it definitely needed some love. Aside from the nice upgrade to contacts, I'm glad they finally made the mouse hovers more useful. I always thought it was pointless just to show me someone's email address or picture.

The old and useless version:

The new and improved version:

Tuesday, November 06, 2007

Open Source Phone

After Google's announcement of the Open Handset Alliance, I got to thinking about what an open phone platform would mean to me. The first thought that came to mind was OpenMoko and how the two might integrate, because to me, OpenMoko was one of the original open platforms for phone development. What Google has done with the Open Handset Alliance is take the OpenMoko idea further and use its clout to gain support from industry leaders. This is a good thing.

Enough of my rant and back to what I would like to see from an open mobile device.

Seamless desktop integration

To start, I'll go with something simple that would enhance my phone experience greatly. While many phones sync with the desktop, the ones I've seen and used have always been very clunky and unintuitive. What I'd really like to see in a mobile device is one that seamlessly integrates itself with my desktop. I shouldn't have to make sure my calendar is up to date or check whether my contact information is synced; it should just happen. It would also be nice to have some synergy between my phone and desktop, where photos captured on my phone are geo-tagged and displayed in order of where I took them. It would be fun to see where and when I took pictures or sent texts throughout the day. Basically, I don't want to fumble with syncing my phone and apps; it should happen automatically.

Increased location awareness

GPS is one of the most fun aspects of modern mobile devices. More and more cellphones and other hand-held devices are shipping with GPS, and that could lead to some really interesting uses of the technology. It would be nice to have a location-aware todo list. For example, when I'm near the grocery store, my phone could remind me that I need to pick up groceries. Another example I got from a teacher of mine: a theater mode, where within a certain distance of the theater, the phone automatically goes silent.


I'd also like to let people see my availability status from their phones. Much like messengers have away, busy, available, etc., this would be a great way to remind people I'm in a movie or in a meeting and that they should leave a message or call later. This could also be tied into my personal calendar to automatically mark me as in a meeting or unavailable at the scheduled time. Phones, much like IM, can be very disruptive, and by adding presence to my phone I can remind people that I'm not available all the time.

Anything I want

Ultimately, what an open platform offers me is the ability to do whatever I choose with the phone and my data. This is a huge step forward from locked, proprietary phones where only your service provider was special enough to access the phone's full capabilities. With this increased openness we can begin to explore the full potential of our phones' and mobile devices' hardware. We will truly begin to see a second 'social web' forming from the interactions of thousands of interconnected mobile devices. I, for one, am very excited about this push for openness and can't wait to see the new uses for our mobile devices.

Sunday, November 04, 2007

Tools of the Trade

A programmer's tool set and workbench are meant to help them leverage the full range of their skills on the code they are working on. I've been learning the power that a solid workbench and a few sturdy tools can put in your hands when used properly. One of my favorite books, The Pragmatic Programmer: From Journeyman to Master, has this to say about a programmer's tool set:
Tools amplify your talent. The better your tools, and the better you know how to use them, the more productive you can be. Start with a basic set of generally applicable tools. As you gain experience, and as you come across special requirements, you'll add to this basic set... Let need drive your acquisitions. (Hunt & Thomas, ch 3 para 3)
One of the most important tools I can use as a programmer is a solid command shell. I knew many students back at Neumont who never touched the command shell. To be honest, it is sad that they were never taught how to use this powerful asset. When I say command shell, I don't mean Windows' meager cmd, but a full-featured shell such as Bash or Ksh. Fortunately for Windows users, Cygwin makes the full set of Unix tools available.

One of the first things I do to set up my development workbench on Windows is install Cygwin and the tools it offers me, such as ls, find, which, grep, sed, cron, tail, and the Bash command shell. It's amazing how much more productive you can be when you begin to learn the real power these tools have. I personally was raised in a Windows environment, where many people rely on GUI tools and very few know how to do much at all on the command line. During my time in college I began to realize how inadequate the GUI is when you need to run more advanced queries. For instance, it is not easy to find out which *.java files were modified in the past week to include a reference to the Swing libraries.
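That Swing question, for example, becomes a one-liner once find and grep are on hand. A sketch, assuming GNU find and grep and a project rooted in the current directory:

```shell
# List the .java files under the current directory that were modified
# in the last 7 days and mention the Swing libraries.
find . -name '*.java' -mtime -7 -exec grep -l 'javax\.swing' {} +
```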

While I'm no expert yet at using these tools, I can tell you that they do indeed make you much more productive when you are working with code. I have gained quite the appreciation for tail and grep when it comes to looking through log files to find what I need. Since I'm in the test world, these two commands alone have saved me considerable time looking for errors and stack traces. The greatest thing about the Unix tools, though, is the philosophy of making tools that do one job well and letting them be chained together for even more powerful results. This lets you use the tools in ways their creators never imagined. I can search through files, sort them, remove duplicates, and so on, all by piping a few simple commands together. It is definitely worth your time to learn how to use these tools to your advantage.
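As a small sketch of that chaining (the *.log files here are hypothetical): pull the ERROR lines out of a pile of logs, count how often each distinct line appears, and show the most frequent ones first.

```shell
# -h drops the file names so identical lines from different logs
# collapse together; uniq -c counts them; sort -rn ranks the counts.
grep -h 'ERROR' *.log | sort | uniq -c | sort -rn | head
```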

After you've had some time to get used to the command shell, you should look into working with a solid text editor such as Vi/Vim or Emacs. While many people rely on an IDE to edit any source code, I find that with a good text editor like Vim at my side I can edit text much more quickly. I'll admit, I still use Eclipse for my projects, but I definitely use Vim for any text I need to edit, such as XML, properties files, config files, etc. What I would really like to find is a good Vim plugin for Eclipse, because now that I've gotten used to Vim's keybindings, I try to use them everywhere. It's hard to deny the power of a good text editor, and Vim lets me work with text faster than I ever could before.

In the end, if you haven't given the command shell or a good text editor a try, you need to spend some time trying them out. I definitely suggest that you try multiple shells and editors and find one that is comfortable for you. Once you do, use it as much as you can, and as you get better you will see your productivity increase considerably.


Sunday, October 14, 2007

Blog Action Day: Conserving Energy

A while back I decided that I would take part in Blog Action Day and write about an environmental issue that meant something to me. Well, I've been busy moving to Texas and starting a new job, so I haven't had much time to think about it until now. As someone who has always been very interested in computers and technology, I've noticed a growing trend that's only now being treated as a problem: inefficient design in computers and electronics causes them to consume much more power than they should. I've never understood why you wouldn't want to design devices with efficiency in mind, but efficiency is often thrown by the wayside. So, for my Blog Action Day pledge, I'm going to spend some time looking at ways you can improve your computer's energy efficiency.

Get an 80+ Power supply
One of the biggest wastes of electricity on the desktop comes from the power supply. Many PC power supplies run at around 60% - 72% efficiency, with the rest dissipated as heat. Since energy costs are high in many places, running your computer 24/7 can have a noticeable impact on your power bill. By switching to an 80 Plus certified power supply, you're guaranteed at least 80% efficiency at 20%, 50%, and 100% load. This is great news for anyone running their computer as a home server, router, etc., as you can reduce energy usage significantly and lighten that power bill a little.
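To put rough numbers on that (the 200 W load is a made-up figure purely for illustration): wall draw is the DC load divided by efficiency, so the same machine pulls noticeably less from the outlet with the better supply.

```shell
# Wall draw = DC load / efficiency, for a hypothetical 200 W DC load.
awk 'BEGIN {
    printf "at 70%% efficiency: %.0f W from the wall\n", 200 / 0.70
    printf "at 80%% efficiency: %.0f W from the wall\n", 200 / 0.80
}'
```

That's roughly 36 W saved around the clock, which adds up on a machine that never sleeps.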

On a side note...

While reading Jeff Atwood's Coding Horror blog I noticed a very interesting point he made about choosing the right wattage for your power supply. All too often, people put in the equivalent of a 500-horsepower engine just because they 'might' need it some day. The problem lies in the design of power supplies: they are notoriously inefficient at low loads, so while your computer is idling, an oversized supply is at its most wasteful. So when you are buying your 80+ power supply, consider how much your rig will actually need and buy accordingly!

Get an LCD monitor
Many people stare all day at the largest energy hog on their computer. That's right, your monitor can be a voracious consumer of electricity. If you own a CRT monitor, it's time to ditch that heavy thing and move to the LCD age. LCD monitors consume roughly one third of the energy a CRT consumes [1]. There are also other environmental issues you escape by switching to LCD. For instance, CRT monitors contain a high amount of lead [2], making them difficult to dispose of and exposing people to lead's effects if disposed of improperly. LCDs have so many advantages over CRTs (they weigh so much less!) that it's time you considered getting rid of that old CRT (properly, of course) and investing in a shiny new LCD monitor.


Turn on your computer's power saving features!
This is a very simple, very free way to help conserve energy. Simply open up your power settings and pick a few reasonable options: have it turn your monitor off after some amount of inactivity, spin down those disks, put it into hibernate or suspend - just turn the power saving features on!

Wednesday, September 05, 2007

I missed the search train?

I just noticed that Google Reader has the long-overdue search feature. When did this happen? I must have missed it while travelling to Austin, I suppose. Either way, I am extremely happy to see this feature finally implemented. Let us all rejoice in the Googly searching goodness that has now been set free on Google Reader. Yay!

Monday, July 09, 2007

Compiz Fusion Time

I got bored a couple of days ago and installed Compiz Fusion for the first time. I must say it is really great software; even in its early alpha phase it is very impressive. It looks like much of Beryl and its settings manager were carried over into Compiz Fusion's settings manager, along with many of the effects. A couple of changes, though: the cube can now have a reflection, and there's a new way to move through the desktops using ctrl+alt+down. There are many other changes in Compiz Fusion, but I'm too lazy to write about them right now, so enjoy the little video I made of my desktop.

Tuesday, July 03, 2007

Ubuntu Fun

I just recently installed Ubuntu Feisty Fawn on my T60 again and I have to say I'm loving it. Most everything is running smoothly and I have most of what I need installed. It makes me happy to see that the Sun JDK 6 is in the main repository along with NetBeans, Java DB, and Glassfish. Well, NetBeans and Java DB are only sort of in the repository - you still have to download them before you can actually install them.

Other than that, my experience with Feisty has been going quite well. I have managed to get most everything set up easily and without any major problems. I even managed to install IBM Rational without messing up my laptop. The only thing I haven't set up yet is XGL. There are a couple of reasons for my delay there. First, Compiz Fusion, the merge of Beryl and Compiz, is still in an alpha phase, and second, I have always had problems getting the ATI driver working properly and getting all the features I need working well with Compiz. I'll probably get around to installing it later, though, because of a video I saw.

All in all, Feisty is awesome and I don't plan on going back to Windows any time soon.

Thursday, June 14, 2007

Bus Factor = 1

I haven't been as prolific on this blog as I had hoped a few months ago, but that's how life goes. You get busy with school and you don't have time to write. Since I have a few minutes, I thought I would write quickly about something I learned at the end of this quarter at Neumont.

Always bus-proof your projects.

Many of you developers know what I'm talking about, but I'll explain what I mean by "bus-proof." Think about a development project you have been on for a moment. If someone on your team was hit by a bus, would the project be able to continue? How many people would have to be hit by a bus before the project couldn't continue? That's basically the entire concept of a project's bus factor.

I learned the hard way this quarter that you should always bus-proof your projects. The project I was working on this quarter had a bus factor of 1. Can you guess what happened? *dramatic pause* He got hit by a bus. OK, so he didn't really get hit by a bus, but he wasn't able to work on the project anymore or even help us with any questions we had.

After he got hit by the bus we were basically SOL after that.

Although the project ended up being a complete disaster, I did learn something from the experience. I found there are a couple of different ways you can help bus-proof your project:

Always comment your code.
If your code isn't commented, then the developers who come after you will have no clue what you were trying to do. Always comment anything you had to hack around, and note why you wrote the code the way you did. That way no one has to guess your intentions.

Document your decisions
You are going to make many design decisions in your project that aren't going to end up as comments in source code. It makes life much easier if you just document why you decided to use pattern X over pattern Y, or why you chose Guice over Spring. You get the idea.

Have a roadmap
One thing I have found very useful to any project that is going to be around for a while is the roadmap. By documenting what you plan to do with the project and giving it direction, you make it much easier for new developers to see where you have been and where you are going.

Limit the scope of your work
One of the easiest ways to give your project a bus factor of one is to let a single person have their hands in the whole project. It is easy to see why this is a problem: if only one person has a grasp of the whole project, Murphy's Law is going to take its toll. There are a couple of solutions. First, have more than one person with intimate knowledge of the project, and second... DOCUMENT YOUR PROJECT. (See the tips above.)

As you can see, these are just a few of the ways you can help bus-proof your project. As a good programmer, you should always be looking for ways to keep your project's bus factor from falling to one. I definitely learned the value of that at the end of this quarter.

Monday, June 11, 2007


Just a little thing I found out: to copy a directory with scp, just use the -r flag. (r for recursive, duh!)
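For example (the host and paths here are made up - substitute your own):

```shell
# Push a local directory tree to a remote machine:
scp -r ./myproject user@remotehost:/home/user/backups/
# Or pull a remote directory back down:
scp -r user@remotehost:/home/user/backups/myproject ./restored
```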

Tuesday, May 08, 2007

Automate This - What to Automate


Automate This! is a series of posts about my main interest in software development: automation. I'm going to start off by talking about what can and should be automated on software development projects. Since automation is useful beyond software development, you can rest assured I'll rant about other things you can automate in later posts. But enough of that little side note - back to automation. I want you to think back to the projects you have been on. I know it can be scary to think about some of the things you have worked on, but just keep thinking... OK, now that you have a few projects in mind, how many of them automated some aspect of the project? The build, perhaps, and unit tests (if you actually have unit tests!) are probably the first things that come to mind, but is there anything else your project automated? If your experience has been anything like mine so far, that is probably about all that comes to mind.

Where to now?

So where do we go from here? What else can be automated? Is automation really that useful? I know when I first started getting into automation, these were some of the first questions that came to my mind. To answer them: yes, automation really is that important. Who really wants to remember to run the test suite, or to start the build before leaving work? I know I don't; doing those by hand is very error prone. Not to mention that if I had to remember to run the tests, they never would be run! As for what else can be automated, that depends on what your project needs done in a consistent, repeatable manner. Making sure documentation is updated and packaged is a great process to automate; deploying your application is another. Do you have any code that is generated? Automate it!

As you can see, there really are many tasks in a project that can be automated. Since each project is different, your automation needs will change too. To deal with the changes from project to project, I keep a simple guideline for what should be automated on any project I work on: if it needs to be repeated, generated, or done consistently, it should be automated.

Some Things Should Always Be Automated

Otherwise, it's really up to you what you can and can't automate on a project. Although the specifics vary, I have found there is usually a core set of things you should always automate in a project. My list is pretty short, so read it!

Code Generation - anything that needs to be generated (SQL scripts, web service skeletons, etc...)
Tests - any tests you have: unit, regression, integration, etc...
Documentation - BOTH developer documentation (such as Javadocs) AND user documentation
Builds - I don't care what you use, just automate it!
Packaging and Deployment - usually an extension of the build; you should automate putting the project together as you would release it to the public. If it's a web application, learn how to automate its deployment to the web server and you will thank yourself later.
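Tied together, those core tasks make a natural pipeline. Here's a toy sketch of what a nightly automation script might look like - every file name is invented, and a real project would call its build tool (Ant, Make, Rake) at each step instead of these stand-in shell commands:

```shell
set -e                                                   # stop at the first failure
mkdir -p build dist
echo 'hello from generated code' > build/generated.txt   # code generation
cat build/generated.txt > build/app.txt                  # the "build"
grep -q 'hello' build/app.txt                            # a smoke test
tar -czf dist/app.tar.gz -C build app.txt                # packaging
echo 'pipeline OK'
```

Because of set -e, a failing test stops the script before packaging ever runs - exactly the behavior you want from an automated build.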

Go Forth and Learn

OK, now that we have our introduction to project automation, go out and learn more about it! Look through other projects' code and see how they handle automation, and definitely learn how to use a build tool such as Ant, the ubiquitous Make, or the trendy Rake. In my next post, I'll introduce these three build tools and show how they are used on different projects. Until then, have fun learning your build system!