Monday, December 20, 2010

Haskell, the language

I started getting serious about Haskell tonight. I have Graham Hutton's book "Programming In Haskell", and I installed hugs and started going through the exercises in the book. (I finished the Matlab book last week.) The installation on my Xubuntu system worked without problems (as usual) and I had fun with the first chapter.

Sunday, December 12, 2010

More fun with matrices

I'm more impressed with Matlab and Octave.

Octave (and Matlab, too) has a small language that lets one build scripts. Two features of this language give Octave (and Matlab) their advantages.

The first feature is parallel assignment. Programming languages have had serial assignment since the early days of COBOL and FORTRAN. Serial assignment moves one item in a statement, like BASIC's

LET A = B + C

It is a simple operation, easy to program and easy to understand.

Parallel assignment moves multiple items in one statement. Perl has it, and the typical example is

($a, $b) = ($b, $a);

to swap two values. The corresponding code with serial assignment is:

temp = a;
a = b;
b = temp;

Notice that one line of parallel assignment requires three lines (and an extra variable) in serial assignment.

(Actually, COBOL had a sophisticated "MOVE CORRESPONDING" operation that is rarely discussed. It moved one record to another record of a different structure, moving fields based on their names, which meant that COBOL "did the right thing", all in one line.)

The second feature is the colon operator, which generates a sequence of numbers. The expression "1:10" generates a list of numbers from 1 to 10. A variant on the syntax allows you to specify the interval size, so the expression "1:0.5:10" generates the numbers from 1 to 10 in steps of 0.5.

An expression that generates a list is more powerful than one might think. After all, you can generate a sequence of numbers in a loop and every programming language has basic looping. But the expression that generates a list is a compact thing, roughly equivalent to C's pointer arithmetic. Just as C can use pointer arithmetic for

while (*d++ = *s++);

Octave can use the colon operator for

list = 1:7:365;

for a list of numbers that might represent days in the year.

Combining parallel assignment with the colon operator provides for some powerful programming constructs. Where in other languages we would need to set up loops and control variables, in Octave we can perform a single operation:

output(10:20) = data(1:11);

This moves eleven items from 'data' into 'output'. Without parallel assignment and the list generator, we would need:

for (int i = 10; i <= 20; i++)
    output[i] = data[i - 9];

Now, one might argue that the difference between these two code snippets is small: only one line of code, and the latter uses familiar constructs. Yet I find the Octave version much easier to read -- once I learned the 'lingo' of Octave.

And programs that are easier to read are easier to write, easier to understand, and easier to debug.

So I'm impressed with Octave.

Thursday, December 9, 2010

Fun with matrices

I did some work with Octave (the GNU variant of Matlab) last night. I performed one of the exercises in the book "Engineering Problem Solving with Matlab". It was fun, working on a new tool and on a problem different from the normal work-day problems.

I found one difference between the book's description of Matlab and the behavior of Octave, and it was in the zeros() function. This might be due to a difference between Octave and Matlab, or a change in a later version of Octave (or of Matlab), or possibly an error in the book. Solving this challenge and seeing the plot come out was the most fun of the exercise.

Wednesday, December 8, 2010

Calendars and dragons

I attended the CALUG (Columbia Area Linux User Group) meeting tonight. We had an open conversation about calendar tools and then watched the movie "Sintel". Both were educational. The former was less about technology and more about business models and group psychology; the latter was impressive in both its technology and its artistry.

Tuesday, December 7, 2010

Microsoft WebMatrix

I attended the CMAP (Central Maryland Association of .NET Professionals) meeting tonight. The presentation was on Microsoft's new "WebMatrix" development tool, a low-end web page/app editor/builder.

WebMatrix is pretty nice, for what it does. The description "low-end app builder" is perhaps not quite accurate, nor is "high-powered HTML editor". WebMatrix is an IDE and a set of libraries that sit on top of .NET, run under IIS, and serve dynamic web pages. You can program web pages with this funny little language (like PHP or ASP or JSP) and get results pretty quickly. I got a lot out of the presentation.

I also chatted with a bunch of folks, including my friend Ben who wants to attend ShmooCon. Apparently the tickets have already sold out, and he missed the small window.

And bonuses: I picked up a copy of a book on IronRuby (Microsoft's .NET implementation of Ruby) and a USB memory stick with Microsoft Windows Phone stuff on it.

All in all, a good evening.

Sunday, December 5, 2010

Adventures with MatLab

On a recent trip to the Book Thing, I found a copy of "Engineering Problem Solving with MatLab", a dual introduction to engineering problems and the MatLab software. For me, the engineering is a stroll down memory lane; MatLab is the new experience.

I installed Octave, the GNU equivalent of MatLab. Octave performed some basic functions identically to MatLab, which got me excited. I have yet to try the advanced features.

The book contains a floppy disk with programs and data (I assume) for its exercises. The Linux systems (//desdemona and //delilah) refuse to mount the disk. Perhaps it is no longer readable -- the book was published in 1993. I may have to get the data from another source.

My plan is to perform the exercises in the book. The idea excites me -- it's a fun, geeky way to learn a new language!

Edit: Apparently the exercises use very little in the way of data from the floppy disk. I had some fun this evening, entering data and displaying graphs. Whee!


Saturday, December 4, 2010

Sharing is nice

I did a little bit of technical work at home today: I configured a Linux system to automatically mount a share from another Linux system. It took less than 30 minutes, even with my fumbling searches for information on the web.

This change makes my home systems a little more web-like, in that the data is not stored on the local PC but on the server -- even though the server is less than four feet away.


Thursday, November 18, 2010

Cloud computing user group

I attended a meeting of the Capitol Area Cloud Computing user group. The presentation was on salesforce.com, a platform for building cloud-based apps.

I was impressed with the product. It's more than a sales or CRM tool; it is an application platform that can be used for a number of things.

I was also impressed with the enthusiasm of the attendees. They were much more "into it" than last night's audience for the Windows Phone 7 meeting. The salesforce.com crowd was also more diverse, with recruiters, newbies, and experienced developers. (The Windows Phone 7 meeting had only experienced developers.)

This is a likable group, and I plan to attend future meetings. Next month will have a presentation on Amazon.com's cloud offerings.

Wednesday, November 17, 2010

Microsoft User Group

I attended the BaltoMSDN user group meeting tonight. BaltoMSDN is a local group interested in all things Microsoft, and mostly the developer tools such as Visual Studio and .NET technologies.

Tonight's meeting morphed from a .NET meeting into a Windows Phone 7 meeting. The conversations were interesting, but as I am not a Windows Phone developer, not particularly pertinent to my work.

The meeting was held at the Baltimore Emerging Technology Center, which was part of the appeal of the meeting. They have had meetings outside of Baltimore, and I find it easier to attend meetings in the city. And the ETC is a nice space.

I had some good conversations with the folks at the meeting.

Tuesday, November 16, 2010

Linux is not Windows

I attended the BaltoLUG Linux User Group meeting tonight. They had no speaker, but an open discussion about Linux distributions and the features that people liked or wanted in distros. It was a lively discussion with various tangential ramblings.

One idea that came out of the meeting was: Linux is not Windows. More specifically, lots of corporate support sites know how to fix problems with Windows installations and only Windows installations. They are set up with prepared scripts and Windows-specific training; they cannot handle problems for Linux systems, or Mac systems.

Using Linux as an individual means that you are a pioneer. Linux requires a sense of rugged self-sufficiency, where you can handle just about any problem. Some corporations can handle Linux and Mac issues, but many cannot. (And expecting them to handle Linux is unrealistic, given the demographics of the market.)

Not a happy thought, but a realistic one.

Thursday, November 11, 2010

Networking and co-working facility in Baltimore

Today is Veteran's Day, or as some call it, Armistice Day. A holiday for the federal government, and therefore a free day for me. I did what I could with it.

I lunched with former co-workers today. Yummy burgers! And speculation on the future of tech. We all agree that Java needs a "2.0" version, something that moves Java up from its current level to compete with .NET. We consider the current Java (and its JVM and libraries) as "Java 1.0". It showed that the virtual processor was feasible, and usable, and even profitable. Microsoft .NET is a competent product and Sun (now Oracle) must step up and improve Java to match .NET.

I also visited the Baltimore Beehive, a co-working location. For a modest fee, one gains access to the facility, which is a large room with tables, chairs, monitors, power, and network access. I see several advantages to working in the Beehive over working at home: fewer distractions, better furniture, a collection of books, and interaction with other folks at the facility.


Tuesday, November 9, 2010

Stevens in Washington DC

I attended a gathering at Stevens tonight. Not in Hoboken, but in Washington.

The gathering was to recognize the role Stevens has played and continues to play in educating and supporting veterans. The crowd was a good size, with perhaps 30 alumni attending (and possibly that many staff).

Stevens in Washington is not a full stand-alone campus. It is space in the Ronald Reagan building. They have one classroom and some offices. This is the new "urban school" model, which uses office space but doesn't require the acres of greensward for frisbee games. For graduate programs, I think this model makes sense.

I chatted with a bunch of folks, including my old class-mate Bill Accardi.

Thursday, November 4, 2010

An orchestra in a box, and smart people

I attended a meeting at Loyola College this evening. I had heard about it through a random e-mail, and had it scheduled on my calendar.

The presentation was on "Simfonia", a PC-based music synthesizer. It's an impressive package, given the challenges of creating music that sounds real. It can synthesize music for many different instruments, and the authors are working on the problem of "liveness" to music. (They know the desired effect and the problems, but the solutions are difficult.)

The meeting was a pleasant experience. The folks were *smart*. Within five minutes, we talked about synthesizing music, the Turing Test, and rounding issues. It's nice to chat with knowledgeable people!

Wednesday, November 3, 2010

Professional meeting, Microsoft, and networking

I attended the CMAP meeting last night. I chatted with some folks before the meeting started, and listened to the presentation on Microsoft's "Entity Framework 4". (The previous version was Entity Framework 1; Microsoft jumped to version 4 to match the version number of C# and the .NET Framework.)

The presentation went at a slow pace, and gave me some time to think about the Baltimore Beehive, the local co-working location. If I get the opportunity to work there (say, one day every other week) I would need a PC. My trusty IBM ThinkPad just died, so it won't be that one. I could replace it, or I could use the Apple MacBook. Or, I could get a tablet, although I think a computer with a keyboard would be better for coding.

A day at the Beehive would also give me the opportunity to network with other folks, perhaps at lunchtime meetings.

Wednesday, October 27, 2010

Two more steps towards functional programming

I took two steps towards functional programming today.

First, I read a bit of "Pearls of Functional Algorithm Design", a book of essays on functional programming, modelled after Jon Bentley's "Programming Pearls" book. "PFAD" is written with examples in Haskell, and I have enough knowledge to follow the text but not enough for full comprehension. I'm not worried; my early experiences with object-oriented programming were similar. I plan to read up on Haskell a bit more, continue reading the text, and, if I can find the time, write some examples in Haskell myself.

Second, I continued my re-design of a program at the office, converting it to a modest form of functional programming. A very modest form. I call it "constructor programming": I attempt to push all code (well, all non-trivial code) into the constructors of objects. Using this technique, I can use plain object-oriented languages (like C++ and C#) yet still get a feel for functional programming. (Or what I currently think of as functional programming.)
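
To make the idea concrete, here is a minimal sketch of what I mean by "constructor programming". The class and names are invented for illustration; the real classes at the office are different (and larger).

// A minimal sketch of "constructor programming": all of the non-trivial
// work happens in the constructor, and the object is a read-only value
// afterwards. The class is invented for illustration.
#include <numeric>
#include <vector>

class Average
{
public:
    explicit Average(const std::vector<double>& values)
        : m_value(values.empty()
                      ? 0.0
                      : std::accumulate(values.begin(), values.end(), 0.0)
                            / values.size())
    {
    }

    double value() const { return m_value; }

private:
    const double m_value;   // computed once, in the constructor; never modified
};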

The re-design work was helpful. It forced me to think about the classes and the true problem. I ended up with a cleaner design, or so I like to think -- I have no measurements for complexity. (Come to think of it, I do have a utility to measure class interface complexity, and perhaps I should use that to measure my progress.)

I'm happy with the direction the code is taking. The code is clean and easy to understand. I think we could explain it to the business users and they would understand it. This is a big win, as the current system is quite opaque.


Wednesday, October 20, 2010

Inching towards functional programming

My recent work has been in object-oriented programming, but I'm interested in functional programming. I've read about it, talked with people about it, and I think it has a lot of promise. But working on it is tricky. The "regular work" during the day is in object-oriented programming, due to the tools, existing code, and talent of the rest of the team.

Despite the limitations, I can take some aspects of functional programming and use them in plain object-oriented code. Today I worked on the aspect of immutable objects. I've been designing (and re-designing) some classes, and I have been designing them to use immutable objects. The exercise has been enlightening, and I have gotten a better design out of it too.
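
Here is a small, invented example of the kind of immutable class I mean. There are no setters; a "change" hands back a new object and leaves the original untouched.

// An invented example of an immutable class. No setters exist; "changing"
// the object means constructing a new one.
#include <string>
#include <utility>

class Account
{
public:
    Account(std::string owner, double balance)
        : m_owner(std::move(owner)), m_balance(balance) {}

    std::string owner() const   { return m_owner; }
    double      balance() const { return m_balance; }

    // Instead of a deposit() that mutates this object, return an adjusted copy.
    Account withDeposit(double amount) const
    {
        return Account(m_owner, m_balance + amount);
    }

private:
    std::string m_owner;
    double      m_balance;
};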

So my recent efforts have been, effectively, to gradually transition from object-oriented programming to functional programming.

I followed a similar strategy when I moved from procedural to object-oriented programming. At the time, I was working in C. The client's shop was heavily regulated (by internal folks, not the government) and the corporate standards group strictly controlled the software installed on their PCs. (They still strictly govern software.) This was also prior to the great revolution of the internet, the web, and the open source movement. Acquiring software was harder (and more expensive) in those days.

I started my move from procedural code to object-oriented code by doing three things: reading about it, attending conferences, and programming in the style of object-oriented programming with the tools we were allowed to use. I didn't create anything as elaborate as the cfront compiler. Instead, I designed function calls that worked like object-oriented code. I called it "thing-oriented programming", and it was somewhere in between procedural programming and true object-oriented programming. Like today's efforts, it was enlightening and gave me better designs.

I'm excited about the future of functional programming. And with the internet and open source efforts, I should be able to try some real functional languages at home. The biggest challenge there will be time.



Tuesday, October 19, 2010

With vim

Attended the monthly BaltoLUG meeting tonight. It was easy; the new location is downtown, about ten blocks away.

The meeting was held at the Baltimore Node, a hacker space. This was my first visit. The space is a large workroom with various bits of electronics on tables, stored in bins, and tacked onto the walls. Not elegant but very practical.

The presentation was on VIM, the improved VI editor that is shipped with just about every distro of Linux. It's a powerful editor, and reminds me of TECO on the DECsystem-10. It can handle regular expressions and it has a macro-programming tool.


Monday, October 18, 2010

Hardware fun

I did some hardware work tonight.

I replaced a failing DVD reader on //ophelia (a Dell GX280 that was gifted to me). It came with a CD reader, and I had installed an old used DVD reader. That reader worked for a while, but recently failed. The failure was physical; there was something preventing the drive tray from opening.

A quick stop by the Radio Shack web page got a new DVD reader on its way to me. (Although there was only the one selection for internal DVD readers. I suspect that DVD readers will soon disappear, replaced by USB drives and network connections.)

With the new DVD reader installed, I was able to upgrade the operating system to SuSE 11.3. The upgrade went smoothly, and I like the look of the new KDE desktop.


Sunday, October 17, 2010

Open source in Pennsylvania

This Saturday I attended CPOSC, the Central Pennsylvania Open Source Conference.

CPOSC is different from most conferences. It's small, with a maximum of 150 attendees. It's run by volunteers. It lacks the pizazz and glitz of the big conferences. It focusses on open source software, avoiding proprietary and commercial solutions.

Despite the unusual approach, it has interesting sessions and good speakers, on par with those at the larger conferences. I attended several sessions and they were just as good as sessions at the Microsoft or O'Reilly conferences.

The sessions I attended included:

- A review of Java, the Tomcat web server, and the Eclipse IDE and how they work together.
- "One-line" programs that do useful things in Perl
- Javascript and how it can be used effectively
- A review of the latest version of Wordpress, which is growing from a blogging framework into a content management system.
- A summary of new features in Rails version 3
- Writing internet applications in Python using the "Twisted" framework. This framework is much easier than the Java RMI or the Microsoft .NET solutions for internet client/server applications.
- A list of collaboration tools for organizing and managing projects with distributed teams

The schedule is too fast to learn any topic in depth. Sessions run for fifty minutes, which is too short to discuss deep technical issues. Effective participants use the sessions as a survey of solutions, noting promising ideas and investigating them later.

CPOSC is targeted to practitioners. The speakers and attendees are practitioners, and can interact and share knowledge. It is a conference run by geeks for geeks. Managers (non-geek managers) may not want to attend, but they should consider sending their teams.

This was the third CPOSC; a fourth is planned for sometime late in 2011. I'm already pencilling it onto my schedule.

Wednesday, October 13, 2010

Meetings and tech - single packet authentication

I attended the CALUG (Columbia Area Linux Users Group) meeting tonight. The presentation was on Single Packet Authentication (SPA), a successor to port-knocking for granting access to selected users and hiding open ports from other users.

A somewhat scary aspect of the meeting was the fact that I could keep up with the discussion. I'm not much of a network geek (programming is my thing) yet at this meeting I quickly understood the concepts of port-knocking and SPA. Some folks knew it better than I, yet a bunch of folks asked questions that I felt I could answer. So I must be learning something about network configurations!


Tuesday, October 12, 2010

Thinking before acting

I've been working on a prototype of the next version of a finance model for the office. The old model is in C++ and the new model will be in C# or Java.

The prototype has advanced to the point that we need unit tests and system tests. We've been building system tests as we built the prototype, so they are in place. We now have a need for unit tests, or more specifically, class-specific tests. We need these tests to exercise features of classes before they are integrated into the system.

Our framework (a home-grown thing) is capable of such tests, and we can add them with a little effort. The unit tests will give us finer resolution on changes and faster feedback on changes. We're keeping the system tests; they exercise specific business cases and connect us to the users.


Monday, October 11, 2010

At the last CMAP meeting, I picked up a DVD with Microsoft's Visual Studio on it. The DVD has a timed edition, good for 90 days.

So far, I haven't installed it.

I will install it; I want to set up a time to use it. This week is busy for me. Installing the software now would simply mean throwing away one or two precious weeks of available time. Better to wait for a better schedule.

I'm thinking that this is not good for Microsoft. If you want people to try software, you want them to install the software and try it, not delay and worry about maximizing the demo time. Any time that is spent not running the software is... well, not running the software.

Getting back to me...

Perhaps I will install it next week. My week-days are busy, and the long commute takes a good chunk out of the day. Week-ends are better for experiments. This Saturday I will be out of town, attending the Central Pennsylvania Open Source Conference, so Sunday is the earliest day that makes sense.


Tuesday, October 5, 2010

The .NET meeting gives me a book on... Ruby

I attended the CMAP (Central Maryland Association of .NET Professionals) meeting tonight. The presentation was on jQuery, which isn't a .NET technology nor even a Microsoft thing. And while I didn't win anything during the raffle, I did pick up Thomas and Hunt's "Programming Ruby" (also not a .NET or Microsoft thing).

I did talk with a few other folks. My friend Ben is considering a position in Washington, but is not happy with the commute. (And it *is* a long commute.)

Friday, October 1, 2010

To list on the resume or not

I've been working with Microsoft VBA for Excel. The biggest benefit of this experience has been to refresh myself on the reasons for leaving VBA and using other languages. VBA is this horrid mutation of Visual Basic that loosens data typing, while retaining the frustrating VB6 editor that tries to be smart about syntax. The run-time engine has vague messages that amount to little more than "something went wrong".

The question before me is: do I add VBA to my resume? On the positive side, it shows that I have experience with different languages and platforms. On the negative side, someone might actually want me to work on VBA applications.

So perhaps I will, with the condition that jobs that ask for VBA skills get the "special" hourly rate.


Thursday, September 30, 2010

Ignite Baltimore

I attended the Ignite Baltimore presentations tonight. (Finally!) They were quite good, and my friend Heather H. gave a presentation! I also chatted with a few folks and re-connected with Peter from the old Java User Group days.

Sunday, September 26, 2010

Book fair and lunch

I attended the local book fair this week-end. This is an annual street fair, with booksellers, authors hawking books, writers organizations, the local library, local museums, authors reading their books, some stages with musicians, and a set of food carts (complete with unhealthy food). The weather was warm and sunny and the fair was well-attended. I bumped into one person with an "I see dead code" T-shirt. We chatted briefly.

Today I lunched with a former co-worker. We talked about a number of things from technology to politics. We both agree that the economy is improving, albeit slowly. We also discussed offshore IT contract work, and can see some benefits and some costs. The costs seem to be harder to measure, so the easy-to-measure lower salaries tend to outweigh the decline in quality. Savvy development managers will see the problems and get a full understanding; inexperienced managers will miss the problems and eventually wonder at the later production problems and customer incidents. A bleak scenario indeed!


Tuesday, September 21, 2010

Linux and LISP

I attended a meeting of BaltoLUG tonight. They are the Baltimore area Linux User Group. The presentation was on LISP.

I like the BaltoLUG meetings. The conversations take unusual turns and go to unexpected places. Beyond the basic LISP talk, we discussed the FORTH language, the Python language, Peter Norvig and Google, tail recursion, and possible meeting places. Our next meeting will be at the Baltimore Node hackerspace. (A location much more convenient for me.)


Saturday, September 18, 2010

Subversion

I set up a Subversion server this week. This was an experiment at home to review the steps (and effort) for Subversion. We may be using Subversion in the office, and familiarity with it will help me in my job.

The installation and configuration went well. Much better than the first time, which was over a year ago. Perhaps I am more comfortable with the concepts of client and server, perhaps it was just plain remembering what I did last time. I did consult my notes, and record my steps.


Wednesday, September 15, 2010

Experiments with C# and .NET

I've been working with C# and .NET this week. The project is a conversion of the current C++ software to C# and the .NET framework. The first step is a prototype to prove that basic functionality is possible. (And so far, it is.)

I'm actually happy to be working with C#. It's an easier language (and environment) than Visual Studio's C++ side, with better auto-completion and better debugging options. Microsoft has done a good job with C++, but a more complete job with C#.

I won't compare C# to Java, as I have not used Java or Eclipse for quite some time. (Note to self: try out the latest Eclipse.) Comparing C# to C++ finds me at least fifty percent more effective with the former. While C#/.NET is a verbose and long-name environment, Visual Studio helps with most of it. (Although sometimes the auto-completion guesses wrong, which is frustrating.)


Tuesday, September 14, 2010

A short lunch to discuss business

I had lunch with Eric from Apptis today. Eric is the account executive for my contract. (After almost a year, we finally meet!) We had a good conversation about the state of the contract, its renewal, and some organizational changes. In sum: nothing that affects me, and it looks like we will have a contract for another 12 months. (Woo-hoo!)

One item he mentioned was Ruby on Rails, which is missing from my skill set. I have wanted to get involved with RoR for some time; today's conversation gives me another push. The question is: how to fit it into my schedule? Evenings? Week-ends? It will have to be one of the two.


Wednesday, September 8, 2010

Another geek meeting

I attended the CALUG meeting tonight. It made an interesting contrast to the CMAP meeting of last night. The CMAP meetings are a bit more organized (structured) and presentations are a bit more formal. CALUG tends towards the informal end of the spectrum.

CMAP and CALUG both offer attendees free pizza and soda. I find CALUG's pizza a bit more digestible, but that may simply be due to different vendors. (CMAP uses Papa John's and CALUG uses a local shop.)

I find the folks at CALUG more approachable. It's easier to start conversations. The conversations are generally deeper and more meaningful. Folks at CALUG focus on the art of programming (or sysadmin-ing, or management) and folks at CMAP focus more on careers. Well, the 'P' in CMAP stands for 'professionals' so perhaps a professional is one who keeps his career in mind.

The CALUG meeting had a presentation on GNU Screen and Byobu. GNU Screen lets you multiplex a terminal session, and Byobu lets you manage the configuration. A "repeat" talk, but still interesting.

More interesting were the side conversations. One was with a Tenable employee, about talent, management, and angry drivers. We also chatted about Agile Development techniques and their advantages over "Big Design Up Front". Another conversation was with some DP veterans (I say "DP" because in their day that's what it was called) about the Ada programming language, DVD players, lucky finds, and early personal computers such as the Commodore PET and the Radio Shack TRS-80 Model I.


Tuesday, September 7, 2010

Local group, global software from Microsoft

I attended a meeting of CMAP (Central Maryland .NET Professionals) tonight. They had a presentation on Microsoft's Managed Extensibility Framework (MEF).

First impressions are that MEF isn't fully baked. It's a collection of add-in classes to C# (not sure that it works with any other .NET language) that lets you build components and connect to them at run-time. In many ways, it is like the concept of a DLL for the initial version of Windows.

Like any technology, it has its upsides and downsides. First the downs:

MEF imposes a fair amount of work on the user; it bleeds into your application, requiring you to code a bit of plumbing. It allows you to mix the plumbing and your business logic; there is nothing to help you separate them. It relies on attributes, those square-bracketed phrases which pollute your code. The code you add is verbose, with long namespace names and long class names.

On the upside, it lets you build applications that bind to objects at runtime. Nifty, yet I'm not sure that this is such a good thing. It seems to have the same problems as the early implementation of DLLs. They, too, let your application bind to objects at runtime. But the incompatibilities between versions led to many broken applications and the eventual coining of the phrase "DLL hell".


Sunday, August 29, 2010

Books on tech

A visit to the Book Thing yielded some interesting books. (The Book Thing gives away books, mostly used books from different vintages. You can find them here: http://www.bookthing.org)

I came home with several recent books and a couple of "early vintage" books. The recent ones include:

"Mac OS X Hacks"
"Python Cookbook"
"lex and yacc"
"Practical C++ Programming"

These are all from O'Reilly, the well-known and well-respected publisher.

The early vintage books (from the pre-PC era, in my mind) include:

"The Joy of Minis and Micros" by Stein and Shapiro
"Human Performance Engineering" by Robert W. Bailey
"Programming Business Computers" by McCracken, Weiss, and Lee

"Joy" and "HPE" are from 1981 and 1982, so possibly not strictly from the pre-IBM PC era, but neither talk about IBM PCs, MS-DOS, Windows, or the internet. Indeed, "HPE" is more about psychology than technology.

"Programming Business Computers" is a very nice text and I picked it up after I saw the name "McCracken" on the spine. I have a few books by him on Fortran and I find his books both informative and readable. (More readable than many of the O'Reilly books.) I guess that is known as star power -- I picked up the book because of the author, not the content. (I find the content informative, although a bit dated. Yet I enjoy the reading of the book. Reading tech books as literature?)


Wednesday, August 25, 2010

Networking

I attended the "Innovate Baltimore" social this evening. It was held at Red Maple, a bar/club that is within walking distance of my apartment. (How could I not attend?)

The event saw a fair number of people (perhaps fifty?) from various tech areas of Baltimore. There were graphics designers, startup entrepreneurs, and even folks providing a physics engine for game developers. I talked with a number of folks (no, not all fifty of them) and even some folks from UPS.

It's nice to see these events here in Baltimore. We need a group to pull together the startups and small technical companies.

Friday, August 20, 2010

Less is more

I accomplished a lot today, although the numbers would indicate otherwise. In truth, it really depends on the numbers you choose to examine.

I made some code changes that fixed some defects. The number of changes and the number of lines of code changed were small. I changed perhaps thirty lines of code across five modules. A manager using lines-of-code or lines-of-code-changed metrics would say that I did very little this day.

The number of defects fixed was small. You could say that it was really only one defect, which expressed itself in several scenarios. So the number-of-defects metric also indicates that I did very little today.

Yet my changes transformed the program from not-useful to useful. The defects prevented us from using the program -- the calculations were wrong -- and my changes fixed the defect. So the metric of "can we use it" went from "no" to "yes", which means a lot to people.

It's not how many lines of code one writes, it's which lines of code.

Tuesday, August 17, 2010

My first presentation

I gave a presentation to the local Baltimore Linux User Group tonight. I think this counts as my first public presentation. I've given presentations for employers before, so it isn't truly my first presentation, but one must start somewhere!

(The presentation was about cloud computing.)

Sunday, August 15, 2010

Letting them vent

Today I had lunch with two former co-workers. It was a good lunch, although it was a bit of a venting session for them. They are in tough but not impossible situations, with problems that range from staffing to managerial politics. I let them vent, and then we had some interesting conversations about outsourcing, on-shoring, talent management and retention, and staff morale.

Earlier this week-end I gained some hardware experience. I installed a second hard drive in a PC, and not only did I learn about drive configuration I also learned about Linux capabilities for partition management and file system mount points. Perhaps not something that belongs on my resume, but it is nice to know that I can still learn!

Wednesday, August 11, 2010

More success and a Linux meeting

At the office, I had more success with C++ and spreadsheets. Today I integrated more code into the existing spreadsheet libraries and now they can write an XLSX file. (One that Microsoft Excel opens and reads without complaint.)

After work, I attended the CALUG meeting. This is the mostly-informal group that meets once per month. Tonight saw no speaker so we talked among ourselves, and sometimes about Linux!


Monday, August 9, 2010

Some modest progress

I made some progress for the client today. (Well, I like to think that I make progress every day, but today's was worth noting.)

Today I successfully integrated the new XLSX libraries into the existing system and read a Microsoft Excel XLSX spreadsheet. This is a big win for the client, who has wanted this functionality for a long time. (I've been building test cases and support scaffolding, so new features were deferred.)

I also integrated the routines to write an XLSX spreadsheet, and they work... sort of. They create an empty spreadsheet that Microsoft Excel accepts with complaints. I think it is due to the lack of content. I expect to fix that problem tomorrow.

But for a few moments, a little rejoicing. Yay! for me!


Saturday, August 7, 2010

Stuck in my head

I solved most of a problem yesterday. Almost. I implemented three-quarters of the solution in the last thirty minutes of the day. I need ten more minutes for coding and another ten for testing. (The test framework is already in place.)

The problem was in the reading of Microsoft Excel OOXML spreadsheet files. The OOXML files are much easier to read than the previous BIFF files, with data stored in XML and therefore as text. The one aspect I was not expecting was E notation for some numbers; my existing parser treated these values as text, not numbers.

The solution is fairly simple: change my parser. I found a state machine description for validating a number (including E notation) in Cay Horstmann's "Practical Object-Oriented Development in C++ and Java". I was able to convert his description into code. All that remains is to run the tests.
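
For the curious, a state-machine validator of this sort looks roughly like the sketch below. It is illustrative only: it is neither the office parser nor Horstmann's exact machine, and it deliberately skips some cases (such as a leading decimal point).

// A simplified state machine that decides whether a string looks like a
// number, including E notation (e.g. "1.5E-3"). Illustrative only.
#include <cctype>
#include <string>

bool looksLikeNumber(const std::string& text)
{
    enum State { Start, PlusMinus, IntPart, Fraction, ExpMark, ExpSign, ExpDigits, Error };
    State state = Start;

    for (char ch : text)
    {
        const bool digit = std::isdigit(static_cast<unsigned char>(ch)) != 0;

        switch (state)
        {
        case Start:
            if (ch == '+' || ch == '-')       state = PlusMinus;
            else if (digit)                   state = IntPart;
            else                              state = Error;
            break;
        case PlusMinus:
            state = digit ? IntPart : Error;
            break;
        case IntPart:
            if (digit)                        { /* stay in IntPart */ }
            else if (ch == '.')               state = Fraction;
            else if (ch == 'e' || ch == 'E')  state = ExpMark;
            else                              state = Error;
            break;
        case Fraction:
            if (digit)                        { /* stay in Fraction */ }
            else if (ch == 'e' || ch == 'E')  state = ExpMark;
            else                              state = Error;
            break;
        case ExpMark:
            if (ch == '+' || ch == '-')       state = ExpSign;
            else if (digit)                   state = ExpDigits;
            else                              state = Error;
            break;
        case ExpSign:
        case ExpDigits:
            state = digit ? ExpDigits : Error;
            break;
        case Error:
            return false;
        }
    }

    // Only these states can legally end the string.
    return state == IntPart || state == Fraction || state == ExpDigits;
}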

I would like to leave my work at the office, yet this problem is stuck in my head. I keep thinking about it. I know that I cannot simply forget it, I must find something else to think about.


Tuesday, August 3, 2010

Microsoft's Reactive Extensions

I attended the CMAP (Central Maryland Association of .NET Professionals) meeting. They had a presentation on Microsoft's "Reactive Extensions", a poorly-named package of classes to handle interrupt-driven programming. I also chatted with a few folks who I see only at the CMAP meetings.

The Reactive Extensions package is an interesting addition to the Microsoft toolset. They built it for .NET and also for Javascript, so there are two versions. Microsoft claims that they have the same capabilities. (If they do, then that says something about Javascript. If they don't then that says something about Microsoft.)

Microsoft makes a big deal about this package solving the programming problems of inter-system non-synchronized communication. And it does solve the problem, although Microsoft seems to have forgotten that this solution was available at least thirty years ago, before PC-DOS 1.0. The package is a simple variant on event-driven programming, or even interrupt-driven programming, in which a process is interrupted by a second process. Or you can view the solution as a variant on the Observer pattern (which Microsoft does).
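
To show what I mean, here is a bare-bones Observer in plain C++. This is not the Reactive Extensions library itself (which wraps the idea in IObservable/IObserver interfaces and adds scheduling, composition, and more); it is just the underlying pattern.

// A bare-bones Observer: subscribers register callbacks, and the subject
// notifies them when something happens. Invented code, not Rx.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

class Subject
{
public:
    using Observer = std::function<void(const std::string&)>;

    void subscribe(Observer observer) { m_observers.push_back(observer); }

    void publish(const std::string& event)
    {
        for (const auto& observer : m_observers)
            observer(event);                     // notify everyone who subscribed
    }

private:
    std::vector<Observer> m_observers;
};

int main()
{
    Subject ticker;
    ticker.subscribe([](const std::string& e) { std::cout << "got: " << e << "\n"; });
    ticker.publish("price changed");             // in Rx, events arrive asynchronously
    return 0;
}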

My impression is that it is useful, but a bit clumsy in implementation. The classes do the job, but I'm not sure classes are needed. A simple interface may be sufficient. And it seems that Microsoft has packed in some extra features (throttling methods and "if" methods) that seem to belong to a different API.

This and other recent innovations in programming tools (Silverlight, Internet Explorer 9, IronPython, IronRuby, etc.) show that Microsoft feels competitive pressure. This is a good thing; Microsoft will provide better products for the market and everyone will gain.

I think the Reactive Extensions will be a big factor in development. It allows for the extension of an event-driven framework. This kind of extension will allow for a coherent set of changes to events and event processing. It may take some time, but the changes in application design will be significant.

I'm not sure about the name, though. "Reactive Extensions" sounds a bit dorky to me. But if my biggest complaint is with the name, then Microsoft is on the right path.


Sunday, August 1, 2010

The PC puzzle

I assembled the parts to my found PC. This is the Dell GX280 that I found a few weeks ago. It had no memory and no hard disc but was in good shape -- no signs of abuse. After installing all of the parts (which went without problems), I tried to attach it to my KVM infrastructure. I use KVM switches to reduce the number of keyboards and monitors.

The GX280 has no PS/2 ports. Apparently it was an all-USB machine. My KVM infrastructure is based on VGA and PS/2, so now I need a few more adapters. I ordered them today, after not finding them at the local Radio Shack.

In other news, I dropped a few bookmarks. The big one was InformationWeek. I had it in my "weekly favorites" list, something to read once a week. I have a number of web sites there, and I use them to keep up with current events in tech. I dropped InformationWeek because I've been unimpressed by (and uninterested in) their stories. Not just this past week, but for the past several months. I get news from other sites, and InformationWeek not only lacked the stories those sites carried, its own stories were boring and irrelevant. Such is the fate of some web sites.


Friday, July 30, 2010

What is a PS/2?

I assembled the pieces of my latest found PC tonight. I found the PC (a Dell GX280) a few weeks ago, without memory or disk. I ordered the missing items including the special drive mounting rails. They all arrived prior to my trip to OSCON, but I did not have time to work on it until now.

Things went together, although not smoothly. The drive rails have these little rubber guards on them, and you have to press hard to screw in the mounting screws. The drive itself had a few issues sliding into the bay.

The memory went in with no problems.

I even had a spare KVM cable set and open port on my PS/2 KVM switch. I plugged in the video and then tried to attach the keyboard cable.

There is no keyboard port on a GX280. More specifically, the Dell GX280 has no PS/2 keyboard and mouse ports. I guess everything was attached through USB ports. I'll need to acquire the necessary adapters.

With a temporary USB keyboard and mouse hooked up, the unit refuses to boot the SuSE Linux DVD. It does boot the Ubuntu Linux CD. I'll check the drive; it might be a CD reader and not a DVD reader. (Do I have a spare DVD reader?)

So close. A little more work, and I will be ready.


Wednesday, July 28, 2010

C++, STL, and brick walls

Two problems in one program today. Problem one: the program crashes on certain input files. Problem two: the program runs slowly.

I spent a good part of today tracking the crash problem in the spreadsheet libraries. The symptoms were easy to describe: the program works for small spreadsheets but not for large ones, where "large" means about 200 MB. The cause was the use of "sizeof()" when I should have used "_countof()". (A subtlety of the Windows API for wide characters.) The investigation took much longer than I think it should have.
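
For the record, the general shape of the mistake looks like the snippet below. The code is invented (GetModuleFileNameW is just a stand-in for any Windows API that expects a count of characters, not bytes); the real bug was buried in the spreadsheet libraries.

// The sizeof()/_countof() trap with wide characters, in invented code.
#include <windows.h>
#include <stdlib.h>   // _countof

void example()
{
    wchar_t name[100];

    // Wrong: sizeof(name) is 200 (bytes), so the API is told the buffer
    // holds 200 characters when it really holds 100.
    // GetModuleFileNameW(NULL, name, sizeof(name));

    // Right: _countof(name) is 100, the number of wide characters.
    GetModuleFileNameW(NULL, name, _countof(name));
}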

On the other hand, I did find the problem. The true problem. I made a correction that solves the problem and doesn't simply mask it. Go me!

The second problem is a harder one. The performance of C++ and STL is disappointing. It is much slower than I would like. It takes about four minutes to load a file, where the old libraries can load it in less than thirty seconds. The old libraries don't use STL; they use custom structures full of C++ pointers and other opaque techniques. The code is impenetrable and we don't understand how it all works.

The new code is understandable but slow.

Are these the only two possibilities? We want fast code that is maintainable. I need to think more about this problem.


Tuesday, July 20, 2010

OSCON 2010 day 2

More good sessions today. I split each tutorial slot in two, attending the first half of one session and the second half of another. By jumping between sessions, I got to see four instead of two.

The morning sessions covered mobile web app development and performance improvements for Python. The mobile web app session was a bit disorganized. It was clear that the presenters had not practiced the talk before the show. But despite the problems, I learned a lot. The key thing I took away was that development for mobile web apps is fragmented along hardware lines. The big platforms (iPhone, Android, Blackberry, Meego, and Windows) all have their own tool suites, and an over-arching tool is quite challenging. For now, building a web app for multiple platforms means building multiple web apps.

The afternoon sessions talked about Grails (Java's Groovy on Rails) and Smalltalk's Seaside web development environment. I was impressed with both. Grails is not a direct port of Ruby on Rails, but it is fairly close. Anyone familiar with RoR will pick it up quickly. Seaside is a different approach to web apps, in which small modules generate fragments of HTML pages and you combine the modules to build an application. It is an interesting take on web apps, one that is not like the JSP/ASP pages of yore or the Rails concept of today.

Tonight will see the OSCON Ignite talks and the O'Reilly Open Source Awards. And then maybe some sleep!

Monday, July 19, 2010

OSCON 2010 day 1

I attended two sessions today. Today's sessions are the longer tutorial format, so we get only two (unless we jump out of one and into another).

The first was "QA in PHP", which covered a lot of quality concepts. It included testing (automated testing, since OSCON is a conference on modern techniques) and more. Beyond testing, the speaker covered code complexity (from lines of code to complexity). I was impressed with the number of tools (and the variety of tools) available for PHP programs.

The second session was "Moose is Perl", which covered the object-oriented extensions for Perl 5. The package allows for mixins (like in Ruby) to provide multiple and targeted inheritance. I've stayed away from O-O programming in Perl because the techniques I've seen are ugly. Moose makes things palatable -- at least to my mind.

I've also had a number of good conversations. They started at breakfast, continued during the morning break, then at lunch, and at the afternoon break. It may seem I talk with people only while eating, but that's when we're not in class.

Wednesday, July 14, 2010

Plugging the leak and talking to strangers

I tracked down a memory overwrite issue today. These are still difficult problems to solve. For this one, I was lucky: the problem was repeatable, and the code had diagnostic routines that would trip the Microsoft run-time checks.

Solving a problem in software consists of the following steps:

1) Make the problem repeatable
2) Identify the point of failure
3) Understand the problem
4) Devise solutions
5) Pick a solution and implement it
6) Run tests to verify that you did not break something else

With step one as a "freebie" and diagnostic routines, identifying the point of failure was easy. I found it in less than an hour, pin-pointing the exact line of code.

Understanding the problem took more work, and in fact required the experience of a colleague, someone who has worked on the software for a while. Together we built an understanding: me saying "this line of code does bad things" and him saying "that line of code should not be run, nor should its function be called (with this particular data)". With that information, I could track upwards in the call stack to find the true problem.

It is a pretty good feeling, solving this kind of problem.

* * * *

On the homeward commute, I got out the "Starting FORTH" book, and the guy across the aisle on the train commented on the book. We talked for the entire trip home, discussing programming, technology, project management, and society in general.

I hadn't thought of Leo Brodie's book as a conversation-starter, but it was.

Sunday, July 11, 2010

The $100 PC -- part 1

Last week, I found an abandoned Dell GX280 PC. It was in the "giving place", that small area in my apartment building behind the elevators which holds junk and items to be thrown into the local landfill.

The PC was in pretty good shape. The case had a few scuffs but overall the case and components showed no signs of abuse. The previous owner had removed the hard drive and memory, but had left the CD drive, the floppy drive, and the CPU and fan assembly.

I can use this PC. I can install Linux and use it as a server, hosting a database, version control, and things for Ruby on Rails.

The PC was free (it was in the giving place, after all). The missing components (memory, hard disc, and rails to mount the hard disc) cost just about $100. I ordered them yesterday, using the power of the internet. They should arrive later this week.


Wednesday, July 7, 2010

When credit is due

I've spent some time during the past two days developing a C#/.NET application. And the experience has been, mostly, a good one.

The program is perhaps more of a utility than an application. It reads some text data and presents it in a GUI. The data itself represents the changes between two spreadsheets. The 'diffs', if you will.

The one frustrating aspect was the C# language. After working in Perl and Ruby (and even C++), I found C# to be a bit of a straightjacket, due to its strong typing. At multiple points in the code, I had to insert extra lines (or what I consider extra lines) to convert data from one type to another. The only reason I can see is the type checking in the compiler. Yet Perl and Ruby can perform the same operations without the extra code.

Languages are about expression, and I find myself more comfortable with Perl and Ruby. But that's beside the point.

We're using C#/.NET for the GUI portion, and here the language and framework perform well. I was able to create the GUI using standard controls (this is a Windows application, so no Silverlight or ASP magic was needed) and present the data. I spent more time on organizing the data in the program and less time fiddling with the user interface. And that's what I expect of a good framework and IDE.

So I give Microsoft credit. Visual Studio, C#, and .NET made it possible to deliver the goods.


Tuesday, July 6, 2010

Designed for big

I attended the local CMAP (Central Maryland Association of .NET Professionals) meeting this evening. I attended despite the efforts of the WMATA metro and the BAL MTA MARC trains, both of which had delays.

The CMAP meetings start at 6:30, but there are usually some preliminary announcements and then possibly a few minutes for a sponsor. I arrived late, but in time for the main speaker. I was disappointed that I had missed the early informal business/networking chats.

The talk was about Entity Framework, an ORM for .NET. The speaker knew the material well but was an average speaker, rushing through some examples and not repeating questions from the audience. I learned a lot about the ORM, which says something.

The one thing that sticks in my mind is the work needed for this ORM. The speaker made it sound like Entity Framework is an improvement over other ORMs (and possibly it is -- I am not familiar with them), but it still seemed like a lot of work to configure and run a simple application. I might be biased towards Ruby on Rails, which does these things for you.

The other impression I have is that Entity Framework, like other .NET things, is built for large-scale projects. It's designed to fit into the enterprise application life cycle, not for a one-person shop building low-cost quick-and-dirty apps.

It turns out that I didn't need to worry about missing the business/network session. I talked with two folks after the meeting. One is a former co-worker, the other a person I know from CMAP. Both are struggling with the challenge of independent work, asked questions, and also gave advice.

I'm glad that I went.


Monday, July 5, 2010

Fun with Cocoa

I experimented with the Apple development tools today.  Even my old OSX 10.4 system included the Interface Builder tool for creating applications. And with the O'Reilly book "Building Cocoa Applications", it was easy.

I did less than I wanted, due to the sinus headache of this morning. Yet headache and all I felt productive and the tasks were fun!


Saturday, July 3, 2010

Windows 7 - the morning after

Windows 7 has an update hang-over on the day after the install. Today was the second day that I used Windows 7 on the Systemax PC, and as Windows booted I was greeted with a flurry of "applying update X of Y" messages. The 'Y' in the messages was the total number of update operations, and it was a whopping 33,769.

Yes, that is thirty-three thousand operations. The entire set was completed in less than five minutes, so I expect that an "operation" may be as simple as changing a single registry entry.

Why Microsoft was compelled to display such messages and organize operations at such a granular level is open to speculation. Perhaps they have seen problems with updates crashing, so the last update operation would be displayed and useful for diagnosis. It is a thought that inspires little confidence.

On the plus side, Windows 7 the "real" edition knows how to talk to my mouse through the KVM switch. The eval copy of Windows 7 did not, forcing me to use a separate mouse attached directly to the PC. Oddly, Windows 7 *would* talk to the keyboard through the KVM switch. (And Linux would talk to both keyboard and mouse through the KVM switch, so I knew it was not a hardware problem.)


Wednesday, June 30, 2010

Windows 7 setup disappoints

I installed Windows 7 this evening. My first impression is disappointment. Microsoft has improved the Windows install program (compared to the old setup programs for Windows 2000 and Windows XP) yet still lags behind the typical Linux install program.

On the plus side, the Windows 7 setup program runs smoothly and recognizes hardware beyond the basic VGA (640x480) video cards. It found drivers for my video card and network card, something that earlier versions of the Windows install did not do. It runs in GUI mode, not in text mode. The old white-on-blue text scheme is gone!

That about sums up the good things.

Windows 7 installs from a DVD, not a CD. Perhaps this is Microsoft's way of ensuring a recent vintage PC. Yet all of the Linux distros that I have tried use CDs, even the bloat-ish SuSE distro. Did Microsoft think carefully about this aspect? Demanding a DVD reader forfeits all of the old PCs to Linux. It may make the Windows install easier, but it gives up market share. (I myself have several PCs with CD-not-DVD readers.)

Windows 7 offers no "Live CD" version, to let you run off the CD (or DVD) without installing on your hard disk. This is a convenient and useful feature of an operating system -- you can verify that it will work with your devices before you commit to the install. With Windows 7, you must commit and hope for the best. (Although Windows seems to have drivers for various cards.)

The install lets you select a partition, which is a nice change from the previous approach of "we are using your hard disk". Yet it gives you no option to adjust the partitions. You must (apparently) use a third-party disk partitioner (perhaps from a Linux Live CD) and then install Windows 7. (On the other hand, by not adjusting partitions, Windows 7 does not have to worry about adjusting file systems including those "foreign" file systems like ext3 or ext4.)

Windows 7, if it finds an old Windows system, keeps the files in the directory "Windows.old". There is no "take over the entire disk I don't care what's there" option.

The Windows 7 install requires three restarts to complete. (OK, one of them is at the very end after Windows has downloaded updates.) Yet the typical Linux install requires zero restarts. (SuSE does need one, though.)

The setup is divided poorly into a number of tasks: copying files, expanding files, installing features, installing updates, and then a generic "completing installation". This last, vague task takes the longest amount of time (almost an hour) with no progress indication.

Windows, being a Microsoft product, requires activation. The install disc comes with a product code on a sticker. The typeface is rather small and hard to read; perhaps it is easier for younger eyes, but it shows a bias against older users.

The setup asks some questions up front and then some at the end. This arrangement is better than earlier installs, which asked questions at multiple points of the install, but it could be better. Why not ask all questions up front and then perform all operations?

In the end, the setup program does get the job done. Windows 7 is now up and running. From a purely pragmatic view, the setup program works and performs as expected. From a marketing view, the program fails: it does nothing to endear me to Windows or Microsoft. It has fewer capabilities than the competing Linux installers, and while the user interface is acceptable (and possibly cutting edge by Microsoft standards), it does not delight.

I use the setup program once and then I use Windows. Let's ignore the shortcomings of the install program and look at Windows 7. I'll report on my findings in the future.


Sunday, June 27, 2010

Photos and e-readers

A friend helped me with photos today. She's a photographer, and we held a photoshoot at the local park.

I asked her for help because I want new photos (and better photos) for my web presence. My current photos are simple against-the-wall headshots that were taken by my camera with a timer. In a word, they suck. Today's photos are much better (from the little that I have seen) and I hope to have them next week-end.

Today I also stopped by the local Borders and looked for the Kobo e-reader. They are not available, but may be on sale in early July. I'm hoping for a drop in the price from $149 to $99. I think such a drop is reasonable, as the Kindle just dropped to $189 and the Nook to $149, and both include internet connectivity (which the Kobo does not).

 

Saturday, June 26, 2010

Some things go and some things come

It may be late for spring cleaning, yet I find that it is time to let some things go and bring some new things in.

I let go of an old IBM Thinkpad T21 laptop this morning. It had several problems: it would not run later versions of Ubuntu, the display did not always work, and it sometimes would not turn on.

I'm also letting my subscriptions to "Fast Company" and "Wired" expire. In the past, I have been a big fan of both magazines, yet now I find that they are ... boring. I have read the recent issues and find very little of interest. While the subscription rates are low, the signal-to-noise ratio is also low, and I would rather spend my time reading other things.

New arrivals include Windows 7, which I ordered today. I have been playing with the evaluation copy and that one has expired. I ordered a "real" copy from TigerDirect this morning at an appealing price -- less than their special price of a few months ago.

Other arrivals include a pair of hard discs and a floppy drive I found in the "giving place" of my apartment building. There was a PC carcass with no memory and no cover, but the drives were present. I popped them out easily. One is a 40 GB drive and the other is a 320 GB drive. I will add the larger drive to my //delwood server and use it to store music files.


Wednesday, June 23, 2010

The fun of doxygen

I experimented with doxygen today. Doxygen is a documentation package for C++ and other languages. I had heard of it some time ago, but I was always doing something else and never got around to it.

I was pleasantly surprised with the package. It was already installed in our environment, although the installation seems to be nothing more than copying the executable to a suitable location. No special DLL files, no COM registration, no sub-package installs, no databases, no virtual machines, and no activation keys. Just a mere "load and go" set of steps.

I was also pleasantly surprised with the capabilities. Doxygen can extract special comments and create documentation, much like Javadoc and Perldoc, and it can also chew on plain source code (without special comments) and create some reasonable documents. I used the latter mode, and got some pretty good results.
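
To give a sense of the special-comment style (this is a made-up class for illustration, not code from our project), a doxygen-ready C++ header looks something like this:

/// Holds a running total and count, for computing an average.
class RunningAverage
{
public:
    /**
     * Add one sample to the running totals.
     * @param value the sample to record
     */
    void add(double value);

    /**
     * Compute the average of the samples added so far.
     * @return the mean, or 0.0 if no samples were added
     */
    double average() const;

private:
    double m_sum;    ///< sum of all samples
    long   m_count;  ///< number of samples
};

Run doxygen over a header like that -- or over a header with no special comments at all, in the mode I used -- and it produces the class pages, member lists, and cross-references.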

Doxygen creates documents in multiple formats. The version we have (which is an old one) creates HTML, RTF, LaTeX, and man pages by default. (There are ways of turning off specific formats.)

Doxygen works on more than C++. It can read C, C#, Java, FORTRAN, Python, PHP, and a few other languages that I don't remember right now. It's also configurable, and configurable in sensible ways. The folks who write doxygen use it for their own work.
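
As an example of the sensible configuration, a minimal Doxyfile can turn off the formats you don't want. (The option names below are standard doxygen settings; the values are just an illustration.)

PROJECT_NAME   = "MyProject"
INPUT          = src
RECURSIVE      = YES
# document everything, even code without special comments
EXTRACT_ALL    = YES
GENERATE_HTML  = YES
GENERATE_LATEX = NO
GENERATE_RTF   = NO
GENERATE_MAN   = NO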

I'm impressed with the package. I wrote a documentation package when I was at UPS; it worked on C++ and C# and created little hierarchy diagrams. The doxygen output is superior to my attempt at a package, and faster too. Since I wrote my own documenter, I know what's involved in parsing C++ and C# code. (And parsing C++ is not easy.)

I spent all of perhaps thirty minutes with doxygen today, and feel pretty good about it. A small part of that time was learning the configuration options; most of the time was experimenting with options and showing other folks the results.


Monday, June 21, 2010

Lunch and RailsConf

I attended a brown-bag lunch session today, held in the office. Some folks who had attended the recent RailsConf shared information about the conference.

There is a lot changing in Rails, and from what I heard, it is for the better. The Rails maintainers have learned from the previous two releases and version 3.0 will have a number of significant improvements. The Rails team is not keeping backwards-compatibility, a move that I am sure caused much discussion on the team.

I was happy to attend the lunchtime session, and happy to see that the folks in the office care about new technologies and events outside of the office. It's a nice change from the UPS environment. At one time, folks at UPS did care about external events. The philosophy changed over the years, and UPS now looks inwards and thinks little of external happenings. It is an attitude that will cost them in the long run, as they fall behind in technology and they lose people to other companies. I hear reports that they already have a difficult time filling positions (contractor or employee) but I suspect that they are not too worried about such a problem.


Sunday, June 20, 2010

A step towards the white shiny side of the Force

I installed the Apple OSX development tools this morning. A while back, I had found the "Building Cocoa Applications" book and read through it quickly. The InterfaceBuilder application for OSX development is similar to Microsoft's Visual Studio for Windows development, so I understand a lot of the concepts.

Or so I think. I will test that idea in the next few weeks, as I develop applications for the Macintosh. I'm sticking to the Macintosh and not developing for the iPhone or iPad for two reasons: the "Building Cocoa Applications" book talks only of Macintosh (it was printed prior to the release of the iPhone) and iPhone development requires more tools than the development kit.

For now, I can keep busy and learn new things. And that is enough.


Sunday, June 13, 2010

Lunches and resumes

This week-end I lunched with my former co-worker Larry. He has been working long hours (7 day weeks) for a few months now, to compensate for project under-staffing. Larry is a decent guy and doesn't deserve half the politics that he suffers. Yet with the current job market, he chooses (understandably) to suffer. And he's in a position to shield the folks that work on his team, so he's doing good things for people.

Our lunches were good. They gave Larry the opportunity to vent (something that everyone needs at times) and also let us discuss possible futures. I tend to be too much of the purist, and Larry can help me with his experience and his pragmatism.

As for resumes, I forwarded the resume of another former co-worker to a respectable recruiter. I feel a bit guilty -- I have had it for a while. I hope that she (the recruiter) can find him a good opportunity.


Wednesday, June 9, 2010

Local lightning talks from ACM

I attended the local ACM chapter's lightning talks this evening. They went off fairly well, although the chapter must work on building attendance.

Tonight was the annual meeting, and a new set of officers was elected. (Which means I escape from my role as Member-at-Large.) I met a few folks, but didn't have enough time to talk.

One interesting topic was the Baltimore hackerspace. It's closer than I thought -- maybe ten blocks. Worth looking at, even if I don't have the time. (One of the talks at Open Source Bridge last week was about hacker spaces. Coincidence?)


Monday, June 7, 2010

Force of habit

I returned to the office today, after the week of science fiction and open source conventions.

And when I did, I logged on to my assigned computer. I entered my ID and password.

Not only did I enter the proper values, but I did so without hesitation.

That's a milestone. For months I had been stumbling over my ID and password, entering the credentials of the previous assignment out of habit and catching myself only after the fact.

Now, I have re-programmed myself with the new credentials. It may have happened a few weeks ago, and I noticed it only now.

Still, it feels good to know that I *can* reprogram myself.


Saturday, June 5, 2010

OSB 2010 concludes

The Open Source Bridge conference for 2010 has concluded. The four-day conference had lots of great sessions, creative people, and imaginative ideas. The conference itself was a model for creativity, with the fourth day an "unconference" day in which the attendees plan the sessions. Rather than pre-planned (and pre-approved) sessions, the day is filled with attendee-run sessions. The opening time slot is spent planning the rest of the day, with people announcing sessions and pasting them onto a large whiteboard. Then attendees pick the sessions they want to attend, and voila! Instant conference day!

I went to a bunch of sessions in the unconference, and they were informative and helpful. The topics were more varied than the regular conference sessions, with people discussing the resurrection of old Hypercard files and travel tips, yet still fun.

I think that the unconference works best at the end of a con. At the beginning of the con, people are still (mostly) strangers, but at the end of the con people are comfortable with each other. Also, it's a more relaxed atmosphere.

I don't know how well the notion of an unconference scales. The planning session requires access to a whiteboard, and that works in a meeting of 150 people of which 30 want to present. I think it would be difficult at a conference such as OSCON, with 3000 people of which 450 want to present. And on-line tools would be a poor substitute -- there is an air of intimacy with everyone in the same physical room.

But then not every conference must have every feature. There are things that OSCON can do with its size that smaller cons cannot. Let's use our resources -- and con size is a resource -- for good purposes.

Wednesday, June 2, 2010

OSB day 2

Today was the second day of the Open Source Bridge (OSB) conference. I was in several good sessions, and again the best conversations were the ones not in sessions but in the hallways. I'm not the only one who recognizes this phenomenon. Today's keynote speaker talked about hacker spaces (permanent spaces for a hacker group -- "hacker" in the "white hat" sense) and discussed the notion of "permanent hallways". A permanent hallway strives to duplicate the hallways of conferences, to let people meet and talk about random interesting topics.

Tuesday, June 1, 2010

Open source means business

Day one of the OSB conference in Portland OR. I attended several sessions that covered various topics. The opening session was for the general conference and had some remarks from Selena Deckelmann (one of the con organizers) and then a keynote on open source and how it can be used to protect journalists.

Other sessions included "How to Give Great Tech Talks", a session on Hypercard and why we need it now, databases and the cloud, search engine optimization, and an evening BOF (birds of a feather) session on Ruby.

I think I was most impressed with the focus on business issues. A number of presentations (not all, but a significant portion) discussed the use of open source solutions to solve large-scale challenges. (Facebook and Twitter use open source, for example.) The discussion of business issues is a change from the open source conferences in 1999. Back then, presentations discussed neat technical tricks and did not talk about business issues. Perhaps it was not necessary, or even appropriate, given the low degree of adoption. Now, the market has changed, and presentations have changed too.

Monday, May 31, 2010

A break from the routine

I took a break from the routine this week-end. I attended a science fiction convention.

The phrase "science fiction convention" often brings up images of nerdy people walking around in costumes from television shows. The picture is accurate, but by no means complete. Yes, there were people walking around in costumes. (There was a costume contest, with prizes for the most elaborate and most popular costumes.) There were speakers presenting on various topics, from science fiction to writing to hard science. There was a vendor room with people selling books, t-shirts, jewelry, and games. There were large events including awards presentations, author readings, and interviews.

Science fiction "cons" are a gathering of people interested in science fiction. Therefore, they are gatherings of people who are well-read, intelligent, and interested in many things. At a science fiction convention, you can find people who are knowledgeable in just about any topic, from Roman engineering to rainforest biology to socio-economic trends in ancient India. (If you find two people interested in your topic, they will probably disagree. But disagree politely and with facts and arguments to support their position.)

I myself met intelligent and interesting people. As I was walking from one session to another, I chatted first with the person running the Library of Congress digital archive project, and then discussed curriculum directions with a professor of computer science at a local college.

This was a week-end well spent. I learned a lot, and contributed to discussions (I hope) in meaningful ways.


Saturday, May 29, 2010

About an hour

Are compile times for projects constant? We like to think that a faster computer will give us better performance, including faster build times. (Those of us with the title "Buildmaster" are very conscious of performance, and we constantly tune the build environment.) Yet the evidence does not support the claim.

Many moons ago, I worked on a project that was built in C (yes, C and not C++) and ran under MS-DOS. The source was about 65,000 lines of code (I know because I measured it) and it took about an hour to build (I know because I ran the build). This was in 1992, and we used PCs with Intel 80386 processors running at 25 MHz. Keep that figure of "an hour" in mind.

A few moons ago (not so many as "many moons ago"), I worked on a project that was built in C++ and ran under Windows. The source was 750,000 lines of code and the build time was about an hour. This was in 2007; our PCs had Intel Pentium whatever processors running at 500 MHz.

Currently I am working on a project that builds in C++ and runs under Windows. The source is about 200,000 lines of code and the build time is ... about an hour. Our PCs are Intel Pentium whatevers running at 850 MHz. The performance "drop" from the previous project to this project may be due to two things: our use of the STL and our build environment. The STL and its templates place an additional load on the compiler, which must expand the templates into code that is not visible to the programmers. And our environment uses servers for storage, not local disks, so reads and writes must travel "across the wire".

So across a span of almost twenty years, the hardware has improved but build times remain at a fixed time. How can this be?

Here are my thoughts:

Our systems are larger: Code size has increased. Larger code takes longer to compile. (And to write, and to debug, and to maintain... but that is another discussion.)

Our languages are more complex: The simple code of twenty years ago (mostly C) has been replaced with the more complex languages of C++, Java, and C#. More complex languages require more complex compilers.

Our environments are more complex: The (relatively) simple build environment of a PC running MS-DOS 5.0 and some NetWare drivers has morphed into the complex workstation environment of a PC running Windows with its multitude of services and processes, all competing for processing time. In addition, the authorization model has changed, with current systems talking to a central controller for confirmation of access on each and every file operation.

One interesting thing is the hour time frame. There may be psychology involved here. Perhaps we accept an hour as a reasonable amount of time to build something of perceived value. We (as a group) will let build times increase, until they reach the threshold of an hour. At that point, we look to improve the build process.

In the future, I expect the trends to continue. I expect our platforms to become more complex. I expect virtualization and the cloud to play a role in builds. (A few companies have had cloud-like build technologies for years.) People have developed new source-control systems that are distributed and not centralized. We have new languages such as C# and Ruby. The Agile Development proponents have pushed for smaller systems and automated tests. All of these will affect the time it takes to build a system.

And I also expect build times to remain at... about an hour.


Friday, May 28, 2010

Minor success with Microsoft Excel XLSX files

Today, after much work, I had some success with Microsoft Excel and its XLSX format. For the first time, I successfully created an OOXML XLSX file that was acceptable to MS Excel.

I had tried this in September of last year. At the time, I had a file that could be read by OpenOffice but not by Microsoft Office.

Today I have a program that takes a plain text file, performs various manipulations, and produces a file that is usable by Microsoft Excel.

With this accomplishment, I now have libraries that can read and write spreadsheets for Lotus 1-2-3 and Excel. (OpenOffice is a possibility, but the client doesn't need that format. At least not yet.)

The work for OOXML was different than the work for the XLS format files. For the latter, I had documentation from Microsoft and a set of source code that produced the files, so the work was mostly research and reading. For the XLSX files, the work was mostly reverse-engineering the OOXML files. Both approaches were helped by our test framework.


Saturday, May 22, 2010

OSCON 2010 - registration

I signed up for OSCON 2010 today. I've been to OSCONs in the past and find them very informative. The con is all about open source and might be the premiere meeting for open source information. They have competent speakers and interesting topics.

The con is in Portland this year. Last year it was in San Jose, and I think a number of people complained. (It had been in Portland for a number of years.) I was one of the complainers. I complained about the food, which I find more appealing in Portland.

I also booked hotel rooms and air travel. I tacked on a side trip to Los Angeles to visit my friend KK. (Since I'm funding the trip, I can take all the side trips that I want.) I got some good deals for the air fare (Southwest) and some ok deals for the hotel rooms. (In previous years the hotels have been cheaper. But I should not grumble.)

The internet made the task possible. At least, possible without the assistance of a travel agent. The arrangements took longer than I wanted, but not too long. (About an hour, altogether. About the same amount of time I would spend telling a travel agent my preferences.)


Friday, May 14, 2010

Microsoft Excel is not quite as accepting

One of the rules of thumb of good programming is: be liberal in what you accept (in terms of input data) and be conservative in what you generate. In other words, don't be fussy with the data coming in because the poor folks who are preparing the data may not understand all of your validation rules. However, be very particular when creating output, because the poor folks consuming your output may not understand all of the possible combinations of your data.

Microsoft didn't follow this rule with their OOXML parsers in MS-Excel. At least, that's how it appears to me.

The OOXML formats are advertised as "open", meaning that the specifications are available and people are free to use the files outside of MS-Excel. The OOXML specification is not for the faint of heart, though. With determination and stamina, one learns that the MS-Excel .xlsx file is a ZIP file containing several XML files.

Reading an .xlsx file is fairly straightforward. You unzip the container file and then pick through the unpacked XML files with whatever parser you like.

Creating an .xlsx file is a bit trickier. I ran into problems last year, when I was using Ruby to create such a file. I used an MS-Excel-generated file as a model, and created the necessary XML files with the content and cross-references within files. I then zipped the files together and handed the resulting file to MS-Excel. Excel wanted nothing to do with the file. (Yet Open Office would happily open the file and render the content properly.)

Even the simple act of unzipping an .xlsx file and then re-zipping it yields a file unacceptable to MS-Excel.

Eventually I was able to create an .xlsx file, by creating an empty ZIP file and adding files to it, one at a time.

The problem is not the contents of the XML files, but the contents of the ZIP file. ZIP files contain not only the (compressed) files but also directory entries. The popular WinZIP program, when zipping up a tree of files, records the files and the directory names of the files, and the directory names get their own entry in the ZIP file.

MS-Excel is unprepared to handle directory entries in the .xlsx file. Creating the ZIP file "by hand" (and not with WinZIP) yields a file without directory entries.
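
My work used Ruby, but the technique is the same in any language: create an empty archive and add each part by its full name, so no directory entries are ever written. Here is a rough sketch in C++ using the libzip library -- an illustration only, and the part list is trimmed (a real workbook needs a few more files):

#include <zip.h>

int main()
{
    int err = 0;
    // Create a new, empty archive (ZIP_TRUNCATE discards any existing file).
    zip_t *archive = zip_open("workbook.xlsx", ZIP_CREATE | ZIP_TRUNCATE, &err);
    if (archive == NULL)
        return 1;

    // Add each part under its full path; only file entries are created,
    // never separate directory entries.
    const char *parts[] = {
        "[Content_Types].xml",
        "_rels/.rels",
        "xl/workbook.xml",
        "xl/worksheets/sheet1.xml"
    };
    for (int i = 0; i < 4; ++i)
    {
        // Assumes the XML parts already exist on disk under these names.
        zip_source_t *src = zip_source_file(archive, parts[i], 0, -1);
        if (src == NULL || zip_file_add(archive, parts[i], src, ZIP_FL_ENC_UTF_8) < 0)
        {
            zip_source_free(src);
            zip_close(archive);
            return 1;
        }
    }

    return zip_close(archive) == 0 ? 0 : 1;
}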

The interesting part in all of this is the behavior of Open Office. It handled the "incorrect" file correctly. Open Office is more accepting of its input than Microsoft Excel. If I wanted to, I could create an .xlsx file that was readable by Open Office and not readable by Microsoft Office.


Thursday, May 13, 2010

A lesson from open source

Yesterday I attended the Columbia Area Linux User Group meeting. The speaker was MacKenzie Morgan, a contributor to the Ubuntu project. (Specifically, her title is "MOTU", for "Master of the Universe".)

The Ubuntu project is run much like a professional development project. With almost 150 contributors, they use tools and processes to coordinate the effort of people. For tools, they use IRC, version control, build scripts, and wiki pages. For techniques, they have "freeze points" for several decisions including tool sets, packages, text strings, UI components, and code. These notions parallel the typical commercial effort, although with perhaps more automation between tools and fewer meetings.

Two bug-fixing techniques are worth mentioning. First, bugs are fixed by a team but not assigned to specific individuals. Developers work on the bugs that they think that they can fix. (In the commercial world, bugs are assigned to individuals based on a manager's assessment of individual capabilities.) The Ubuntu approach seems to have less stress.

The second notion involves the code fix for bugs. In the Ubuntu project, defects are fixed in the project under development and then back-ported to prior projects, including the "current release". In commercial projects, the "current release" is fixed first and then the fix is migrated to the version in progress. (Usually. Sometimes the fix is not migrated into the next release.) The Ubuntu approach ensures that the next release has all of the fixes.

Commercial projects could learn a lot from the Ubuntu project, especially those commercial projects that are outsourcing large portions of their coding. Most of the Ubuntu project is aggregating packages for deployment and ensuring that the packages work together. With outsourcing, a commercial project is in a similar role, aggregating components and ensuring that they work together.

Oh, there is one other difference with the Ubuntu project: The work is done (mostly) by volunteers. This also changes the dynamics of the project. (You can't force someone to meet a schedule by threatening to fire them, for example.) Yet Ubuntu delivers releases twice a year, on time and with high quality.


Monday, May 10, 2010

Let OSX handle it

I spent a few days away from the office, visiting parents for birthdays and Mothers' Day holidays. Which means that I got to work with their computers and not mine for a while.

My parents have a collection of computers. Some are Macs running OSX, some are PCs running Linux, and one is a PC running Windows XP. Our goal was to share printers. Easier said than done, as the printer configurations eluded us. One printer on an Apple Airport Express would work from a Mac with "Leopard" but not on the Mac with "Snow Leopard". Three printers on the Linux PC would work from Linux but not from any of the Macs. (We didn't try anything from Windows.)

We did get things working. It seems that we were trying too hard. We were manually configuring printers on the Macs (and must have picked a wrong option somewhere). We removed our printer definitions, executed the line 'cupsctl BrowserPreference="cups dnssd"' and the Macs then found the published printers and picked the right drivers automatically. No muss, no fuss!

Sometimes it's better to let the operating system handle things.

Tuesday, May 4, 2010

Perl made me a better C++ programmer

I continue to work in C++. The assignment these past few weeks has been made easier by my experience with Perl, specifically the lists and hashes that Perl offers. Now that I can use the STL collections in C++, my experience with Perl collections pays off. The challenge at the office was solved quite readily with a few STL collections. In the "bad old days", I would have created classes in C++. With the STL, I can use the vectors, deques, and maps with little effort.

I credit Perl for two reasons: it has these classes, and it takes effort to create your own classes -- more effort than C++ (at least for me). When working with Perl, I was pretty much forced to learn the collection classes. They gave me a different perspective on programming and the organization of data. Those perspectives carry over to the C++/STL world.
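
A trivial, made-up example of the kind of thing I mean -- the sort of job a Perl hash handles in a few lines, and that the STL now handles just as easily:

#include <iostream>
#include <map>
#include <string>
#include <vector>

int main()
{
    // Count occurrences of each word, the way a Perl hash invites you to.
    std::vector<std::string> words;
    words.push_back("red");
    words.push_back("blue");
    words.push_back("red");
    words.push_back("green");

    std::map<std::string, int> counts;
    for (size_t i = 0; i < words.size(); ++i)
        ++counts[words[i]];   // operator[] creates a zero-valued entry on first use

    for (std::map<std::string, int>::const_iterator it = counts.begin();
         it != counts.end(); ++it)
        std::cout << it->first << ": " << it->second << "\n";

    return 0;
}

No hand-written container class, no manual memory management -- which is exactly the habit Perl taught me.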


Sunday, May 2, 2010

The uphill climb to Kubuntu 10.04

I spent a lot of this weekend attempting to update Kubuntu 9.10 to 10.04. I say "attempting" because after several tries, it still refuses to work.

Try number one ended abruptly, with the upgrade telling me that I needed more free space on / and then not-quite restoring Kubuntu 9.10. The broken system would boot but not let me sign in -- X complained that it could not write to /tmp.

I re-installed Kubuntu 9.10 (a two-hour process by itself) and then re-attempted the update. This time the update told me I needed more free space on /usr. (OK, this failure was my fault. When I installed 9.10 I set up the partitions incorrectly.)

So I re-re-installed Kubuntu 9.10 and then attempted the update. This try worked better, in that it allowed the update, downloaded all of the files, and ran all of the installs. But in the end, Kubuntu 10.04 is broken. It boots and lets me sign on. It then gives me a wallpaper (and a pretty wallpaper it is) and nothing else. No menu, no icons, no desktop folder. Nada. The only way to log out is with the power switch.

It looks like I have two choices here: install Kubuntu 9.10 and never upgrade, or get a Kubuntu 10.04 install CD and install that (without using the upgrade).

Or perhaps I should consider another distro. Debian comes to mind, as this machine is really a server and does not need half the applications that Kubuntu installs.


Sunday, April 25, 2010

Production for use

Some languages are made for large teams, and some are made for the individual. It's not just the language, but the supporting framework and tools that are associated with the language.

For example, C# (with the .NET framework) is a language designed for large teams. The framework is large, much larger than the previous generations of MFC and ATL. The .NET framework has hundreds of namespaces, thousands of classes, and tens of thousands of methods. It is much larger than any one person can consume. When building a non-trivial application in C# and .NET, the natural response is to divide the work among several people -- not just to hire multiple people for the project, but to assign different areas to different individuals. The traditional big areas are database design, user interface, and business logic.

The C# language supports this division of labor. So does the .NET framework and the tools offered by Microsoft (Visual Studio and Team Foundation Server). Microsoft recently added the "partial class" feature to C#, allowing multiple individuals to work on separate parts of the same class.

A language designed for a large team leads to large teams. And once you have large teams, you need managers.

On the other end of the scale, languages for individuals work well for a single person but poorly for a team. I think of the languages Forth, Perl, and Ruby as languages for individuals. The languages (and their supporting libraries) are small enough for a single person to understand. Individuals can create and maintain capable applications without the division of labor among multiple workers, and without the oversight of managers.

It seems that languages (with their libraries and frameworks) cannot be both large and small. They must be one or the other. A language -- for example C -- may start small and grow complex -- into something like Visual C++ with MFC -- over time. But those are two different implementations.

Managers have an interest in the selection of language. Languages that lead to large teams are better for managers -- at least better when the manager has an incentive to build an empire. Languages that encourage large teams let managers build those empires. They also let managers partition the work among their staff, and they support the manager in that partitioning. Languages for individuals do not encourage large teams, nor do they support the manager in partitioning the work. With languages for individuals, dividing the work among multiple people is harder -- the manager must do it himself, and there is no guarantee that any one person will remain within his (arbitrary) limits.

So we have the effect of language design on team organization. Languages for large teams will be used by large teams, but will also grow small teams into large teams. Languages for individuals will be used by individuals, and discarded by large (or even medium-sized) teams.

What does this mean for the future? Will we see a bifurcation of the development world? Will we see one "world" of development for corporations with large development efforts and a second "world" of individual developers? If individuals are not working on corporate projects, what will they work on? (iPhone apps, perhaps?)


Saturday, April 24, 2010

Random meetings

I met a fellow IT worker on the train ride home yesterday. He is a project manager, based in Miami. We chatted for a bit, talking about the technical and personnel challenges of the industry. He was unfamiliar with "The Mythical Man-Month" and "Peopleware" so we exchanged cards and I agreed to send him a list of my favorite books.

The list will include Fred Brooks and Demarco and Lister. I will also include Weinberg's "Psychology of Computer Programming" and possibly Knuth's "Things a Computer Scientist Rarely Talks About".


Tuesday, April 20, 2010

Performance measurements

I've been using Micro-Focus DevPartner Studio for Visual Studio C++ these past few days. (Weeks?) I'm pretty impressed.

It took some time for me to get used to it. Configuring DevPartner Studio is not obvious, although that may be a fault of Visual Studio and its arrangement for plug-ins. But once you configure your projects and build them with the right settings, DevPartner Studio collects run-time information for you. You can collect errors (the old Bounds-Checker type over-runs and illegal references), performance information, or code coverage data. I've been using the performance information, which can be used to improve program run-time performance.

I've made several changes to the code based on the reports. In a few cases, the data showed me programming "goofs" -- things I was doing in C++ that were legal but not quite what I intended. (Yet C++ accepted them, quietly using the default type conversions.) DevPartner's reports allowed me to fix those problems.
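
The post above doesn't show the actual goofs, so here is a hypothetical example of the kind of silent, legal conversion that shows up in a performance report:

#include <string>

// A function that takes const std::string& -- perfectly reasonable.
bool is_keyword(const std::string &word)
{
    return word == "for" || word == "while" || word == "if";
}

// Calling it with C strings is legal, but C++ quietly converts each
// const char* to a temporary std::string on every single call.
long count_keywords(const char *tokens[], long n)
{
    long count = 0;
    for (long i = 0; i < n; ++i)
        if (is_keyword(tokens[i]))   // hidden conversion, once per iteration
            ++count;
    return count;
}

An overload that accepts const char* directly (or converting the tokens once, up front) removes the hidden work -- the sort of change the reports pointed me toward.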


Sunday, April 18, 2010

From slightly broken to really broken

I upgraded //grendel today. It was running Ubuntu Linux 9.04, and I used the built-in upgrade for Ubuntu Linux 9.10. The upgrade is a complex process, starting with the download and execution of the upgrade programs, preparation for the upgrade, the download and installation of new packages, the removal of obsolete packages, and a final restart.

The whole process took seven hours. In the end, I have a broken PC. Linux boots, almost. The ROM BIOS starts Linux but Linux never switches to graphics mode, so I assume that X is not running. The system responds to ping but I cannot SSH or TELNET to the system.

Phooey.


Saturday, April 17, 2010

ACM

I attended the local ACM career day for college students today. I was hoping for some networking opportunities, but they were not there. The turn-out was light, but the discussions of life in the IT world were quite good.


Wednesday, April 14, 2010

Geek networking

I attended the Columbia Area Linux User Group (CALUG) meeting this evening. CALUG is an informal group that meets once a month to discuss things Linux. Attending these meetings requires some planning, as I must drive to the Halethorpe station and park there, staging my car for the trip to the evening meeting. Travel to the meeting will be easier when the Light Rail "Yellow" line is extended from BWI to Columbia. That branch will run right past the Tenable offices, where the group holds its meetings.

Today's speaker was Daniel T. Chen, a contributor to the Ubuntu sound project. He explained quite a bit, and gave us a good understanding of the challenges of the sound project. The team must deal with various hardware (some of it not easily identified or distinguished from similar hardware) and an epidemic of software.

The Ubuntu team has selected a subset of software, simplifying the task. It seems a reasonable approach, although some folks will be upset that their pet project or favorite sound processor is omitted.

After the meetings, I chatted with a few folks. We talked about calendars, compilers, and file formats.


Thursday, April 8, 2010

Performance is not magic

A few other folks and I spent a bunch of time examining programs today. Our goal was to identify performance problems. We have several C++ programs that run, but slower than we would like. And they run slower than the older generation of C++ programs -- in some areas significantly slower.

It turns out that the performance issues are caused by STL. The previous generation of programs used "straight" C++ without STL. Our new programs make heavy use of the library. We like STL, since it makes programming much easier. The STL allows for easy use of strings and collections, and we don't have to worry about memory allocation...

And that's the cause of our problems.

The STL collections, from simple strings (which can be viewed as collections of 'char' elements) to vectors and maps, require overhead. It's not much, and the implementors have done a lot of work to make the STL efficient, but the difference remains. With our data set (hundreds of thousands of items) the small differences add up to noticeable performance problems.

It's like we wrote the program in Java or C#, with their built-in memory management and garbage collection. Actually, it's a bit worse, since STL uses local copies for everything and doesn't take advantage of copy-on-write or multiple references. It instantiates and copies data with wild abandon. (OK, perhaps not wild abandon. But it makes lots of copies.)
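
A contrived sketch (not our actual code) of the kind of copying I mean:

#include <cstddef>
#include <string>
#include <vector>

// Passing the vector by value copies the vector and every string inside it.
// With hundreds of thousands of items, that cost shows up quickly.
std::size_t total_length_copy(std::vector<std::string> items)
{
    std::size_t total = 0;
    for (std::size_t i = 0; i < items.size(); ++i)
        total += items[i].size();
    return total;
}

// Passing by const reference does the same work without the copies.
std::size_t total_length_no_copy(const std::vector<std::string> &items)
{
    std::size_t total = 0;
    for (std::size_t i = 0; i < items.size(); ++i)
        total += items[i].size();
    return total;
}

Multiply that pattern across strings, vectors, and maps handed around by value, and the slowdown starts to make sense.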

Our previous generation of programs was optimized for memory usage, at the cost of manually tracking every object in memory. Those programs are also significantly larger, at five times the number of lines of code. They were fast to run but slow to write and slow to maintain. (Read "slow" as "expensive".)

The realization of STL performance problems raised and lowered our spirits. It raised them, since we finally understood the cause of the problem. It lowered them, as we also understood the solution.

As Heinlein said in "The Moon is a Harsh Mistress": There Ain't No Such Thing As A Free Lunch. If we want shorter development cycles and understandable code, we can use the STL, but the price is lower run-time performance. If we want fast programs, we have to avoid the STL (or at least reduce our use of it) and write the traditional hand-optimized code.

Do we want to drink our poison from the large cup or the small cup?


Sunday, April 4, 2010

Adjusting

I've been at the new gig since October. That's almost six months, and the "change your password" prompt has arrived again. That's a pretty good clock -- not too frequent, but frequent enough to remind me of the duration.

I'm adjusting to the new situation. The new commute is longer, and I am still working out some of the changes, but things are working. I'm working with a different set of people, and that is going well. I like to think that I am doing good work. My co-workers tell me that I am, yet I have the feeling that I am not doing enough. Is this feeling correct? Or am I using a metric from the former position to measure my current performance? Who's to say that the metric is correct? Perhaps I need to listen more to my co-workers and less to my inner guilt.

Most of my work has been with C++. In some ways, this is good: I'm working with C++ and STL, which is new to me. In other ways, I find it worrisome: the "new stuff" in the market is C# and .NET, Java, Python, and PHP. At least, that's what I see in the advertisements and job postings. (I can't negotiate with folks -- not even in a broad sense -- but I can look at the market and observe the desired skills.)

The dress code is a bit of a puzzlement. Most folks on the train and in the building wear suits and ties. Fitting in with them requires at least a button-down shirt and tie. Yet folks in the office, especially on Friday, are more casual. What to do? Fit in with the immediate office crowd, or go with the larger group?


Sunday, March 28, 2010

My PC is not compatible

A long time ago, at the dawn of the PC era (1981), the fate of PC producers depended on compatibility with the IBM PC. While many companies made computers, Compaq was the first company to produce a "PC compatible", the original Compaq PC. (It was a portable PC, in the sense that it had a handle and could be moved from one location to another. It was not a laptop in any sense of the word.)

Compatibility with the original IBM PC (the model 5150) was important. Many folks wrote software that was specific to the IBM PC, and that software would not work on similar computers such as the DEC PRO-325 or the Zenith Z-100. Hardware designed for the IBM PC would not work in other brands of PCs, either.

Yet a funny thing has happened in the past twenty-nine years: The standard definition of a PC has changed.

The original IBM PC had an Intel 8088 CPU, 64K of memory (in 4146 DIP chips), ports for keyboard and cassette tape, and slots that could hold adapter cards. IBM provided adapters for monochrome video, color-graphics video, and floppy discs. The floppy disks were the 5.25 inch, 360K "true" floppy disks. (Or "mini-floppy" disks, as the 8 inch floppy discs from the IBM 3740 were never supported by IBM PCs -- of any flavor.)

The changes have been gradual. The first change occurred in the IBM PC XT (model 5160), which allowed for a hard drive. The IBM PC supported cassette tapes for storage, and floppy disks if you purchased the optional adapter card; the IBM PC XT dropped support for cassette tape and had a BIOS that supported floppy controller cards and hard disk controller cards. The hard disc was a then-whopping 10 MB.

The IBM PC AT supported 1.2 MB high-density floppy discs and had a different keyboard. It used different memory chips and had a larger bus, although it accepted cards from the IBM PC and IBM PC XT. The IBM PC AT had a 20 MB hard drive. IBM introduced the Enhanced Graphics Adapter around the same time.

In 1987, IBM introduced the PS/2 (and later, with Microsoft, OS/2). With the PS/2, IBM changed the keyboard layout (again) and also the keyboard connector. IBM added the mouse port, and introduced the Video Graphics Array (VGA) which supported the (now puny) resolution of 640x480. IBM also changed the floppy disk from 5.25 inch to 3.5 inch with the hard plastic shell, and changed the bus to the Micro Channel Architecture bus.

The PS/2 was received poorly in the market, especially by other manufacturers. They banded together briefly and formed a standards group that produced "EISA", a bus that accepted old PC cards but also accepted new, high-performance cards. The solution was a kludge and died a quick death. Yet the damage to IBM had been done, and the definition of a standard PC was no longer IBM's to make.

A few years later, Microsoft discontinued the agreement to develop OS/2 and introduced Windows. Microsoft gained enough popularity and power to define the hardware platform. IBM lost its leadership role in the market, and eventually sold its PC business to Lenovo in China.

Looking at a PC today, one sees a modern machine that is nothing like the original IBM PC. Nothing from the original IBM PC works in a "modern" PC -- at least none of the hardware. The keyboard is different, the display monitor is different, and the floppy discs (if present; most current PCs don't have them) are different. Adapter cards from the original IBM PC will not work in today's PCI bus. Hard drives from the IBM PC XT or PC AT will not work with today's SATA ports. Memory chips will not fit.

Nothing -- again, in terms of hardware -- is compatible.

Software, on the other hand, tends to work. You can buy an IBM PC clone today and run DOS on it. (Well, a later version of DOS, one that understands CD drives. You can't boot DOS 1.0 from a 5.25 inch floppy disc.) I suspect that the original Lotus 1-2-3 program would run, if you could get past the copy-protection built into the software. (The copy protection depended on various quirks of floppy discs, which don't exist in today's PCs.)

The entire set of hardware has changed, yet the software continues. Software has a longer life than hardware. (I'm not the first to recognize this phenomenon.)

But if software is so long-lived, how do we manage it? How should we design something that will outlive the hardware, and probably the initial designers? How do we ensure that it will continue to work on successive generations of hardware?