Wednesday, April 30, 2008

Ouch, no more Bering Sea Online

That nice little crab fishing Google Maps mashup game has been abruptly taken offline.
It smells like a cease and desist was issued. I can only speculate about who issued the C&D, but my guess is the Discovery Channel.
From my completely unqualified point of view, the game did not seem to violate any intellectual property.
Well, it was a nice game. I thought the Google Maps platform showed a lot of promise. I'd like to see other casual online games make use of it.

Thursday, April 24, 2008

Data Entry Software Development

Have you ever worked a data entry job?
It's not fun. It's manual labor for your fingers. The limiting factor in how well you can do a data entry job is the speed at which you can type. You need not understand what you are typing; you just need to type it. Somebody else has already done all the thinking.
Heavily front loaded software development methodologies achieve a similar result when it comes to development. If all the decisions are made early and are documented, wouldn't it follow that no decisions need to be made when it comes time to actually implement the designs in the development stage?
If that is true, why not hire data entry people to write code? They are a lot cheaper than developers.
Hopefully the more astute readers have picked up on the sarcasm. It's clearly a ridiculous proposal to make all decisions in a project before the work begins. There are decisions and issues that are simply unknowable before the work starts.
Some people are comfortable with knowing that there are decisions that have yet to be made. There are even people who are comfortable not knowing any of the decisions or issues and blazing ahead.
There are people who are very uncomfortable knowing that there are unknown issues and decisions. They want it all documented and reviewed before anything begins.
In a nutshell, these two viewpoints illustrate the differences between the Agile and Waterfall software development methodologies. If either extreme were adhered to, the chances of success would be slim.
When either ideology approaches the other, that's when we enter the realm of practicality.
I think most people will be more comfortable with one side than the other. Personally, I believe that trying to make decisions long before they are implemented is a wasteful practice. Life changes quickly, and it's unpredictable. To assume that we have the answers and sufficient information now is a dangerous assumption.
Conversely, performing no upfront decision making is also a dangerous practice. Some decisions need to be made, but the balance is in not spending too much time and effort in making unnecessary or premature decisions.
What that balance is varies from project to project, and team to team.

Company mandated personality tests creep me out

Is it just me, or is there something unsettling about having your employer give you a personality test, like a Myers Briggs Type Indicator test? I get a gut reaction that makes me uncomfortable.
I'll freely say that I'm an ENTJ, with strong tendencies towards the NTJ and barely tipping to the E.
For me, the most unsettling thing about a company personality test is knowing that the information will be used by people who don't understand the tests. It also takes something that's very personal to me, my personality, and possibly uses it against me.
I had the experience of working at a company that was big into personality testing. I think part of it was the HR director having a father who ran leadership seminars based on personality testing--could have been a coincidence.
I enjoyed learning more about my own personality. It was also nice to learn a little bit about our colleagues' personalities. It didn't really tell us anything new about them, though. Surprise, the obnoxious guy that nobody likes has the polar opposite personality from everyone else. It was really just data that we could use to affirm what we already knew.
I think where company personality tests get really creepy is when people who don't even know you and aren't really qualified to understand the tests make decisions that affect you based on your personality test results.
It makes me even more uneasy when I think about how qualified the people in HR are at understanding the tests. They aren't psychologists. Sure, they get some insight into what the tests indicate, but they have no real understanding of how strongly those tests indicate how well a person can do their job.
Worse yet are company personality tests that are one-offs of the standard personality tests, the kind someone in HR or management writes up based on their own understanding of psychology. They can be really dangerous.
Take for example a team building exercise that the HR director of that company spun up on her own. She had people break into pairs and tell the other person their greatest weakness. When they were through, each person explained their partner's weakness to the group. Marital infidelities, drinking problems, addictions, and other weaknesses came out. How is this information relevant to the company or to peers? I was lucky to get out of there before they got to my group. It felt like something they'd have on The Office.
I think the really bad thing about a company one-off is that it's usually the free option. Real tests with qualified psychologists cost real money. Something someone in the company creates isn't nearly as expensive.
Probably the most unsettling thing about these tests to me is that they are a waste of time, because they are really just an intelligence test. Those who are intelligent enough to see that the test is there to evaluate employees on non-performance factors, and unscrupulous enough to use that knowledge to their advantage, are going to game the test. Why respond with truthful answers when I can provide answers that will get me what I want? I think most people who recognize these tests for what they are will use them to their advantage. Integrity is all well and good, but it doesn't do you much good in a crooked system.
With polluted data, what good is the test really?

Wednesday, April 23, 2008

Nice little list of History's 5 Best Interface Designs

Wired's Gadget Lab has a nice little list of what they are calling History's 5 Best Interface Designs.
I completely agree with the camera controls and the manual transmission. I'm not sure if I'm sold on the mouse yet.
My entry would have been the horse. If you've ever had the opportunity to ride a horse who knows you, you'll know what I mean when I say that they can do what you want before you can think to tell them.

The Typo game

Here's a fun little game that I play with my Office Communicator buddies: the Typo Game.
The game is simple: when you chat with someone and they misspell a word, you try to use the same typo in your reply before they correct it.
It's stupid fun, but it makes life a little less dull.

Monday, April 21, 2008

QA is not just another place to cut costs

In every professional development job I've had, QA has been the bastard child of software development, perhaps of all development.
For software development there's a particularly harmful trend in the role of QA: QA is treated like the proving grounds for developers. Software development shops bring in inexperienced software developers and have them 'pay their dues' in QA until they prove themselves worthy of a development role. I think you're better off getting a bunch of people off the street to do your QA testing than having people who want to develop perform the testing. Reasons:
  • People who lack specialized computer training are going to reflect the actual users of a system better than people who spend 10 or more hours on a computer a day.
  • People who lack specialized skills are going to appreciate the role of a QA tester more than a person who has the training and ability to contribute as a software developer.
  • People who have gone through the effort of earning a degree in computer science with the intention of programming will take the first opportunity to get a programming job. It may be outside your organization.
  • Along the same vein, why waste someone's development abilities? Are you having a developer test code directly?
  • Lastly, in the same way that Bruce Schneier describes in The Security Mindset, people who want to create software don't necessarily have the ability to test software for flaws. Some people are simply better suited to finding flaws.
The only thing I really like about the approach of having software developers pay their dues in QA is it provides an opportunity to learn the products and it gives soon to be developers the opportunity to see the flaws in a system before they work on it.
With that said, having people regularly rotate into QA roles may actually be a healthy activity. If the worst side of the product is apparent to the entire organization, then it's difficult to ignore it. This can work especially well if the people who have the ability to make changes are exposed to the correctable pain points of QA testing.
That's not the practice in most places. In reality, QA has a role ranging from a speed bump to actually reducing the number of defects that get propagated to production. Unfortunately, most QA departments trend toward the former. The purpose of organizing people into QA departments is to prevent defects from reaching the customers. I think that everyone can agree that customers seeing defects is bad.
There are a few reasons that I believe QA departments become ineffective.
  • Gaps between QA and production environments. Production equipment can be expensive. Many organizations do not believe that having an identical, or sufficiently similar, test environment justifies the expense. If you aren't going to replicate production as closely as can reasonably be done, why bother with having a QA environment? It would be far quicker to go from dev straight to prod. If you're interested in having a place where defects can be found and fixed, then build a QA environment that's like production.
  • QA environments that are like production, but still have gaps. If there are cron jobs on prod, have the same jobs on QA. If prod is behind a firewall, put QA behind the same one. If you can safely replicate prod's data into QA, do it. Every difference between QA and prod is an opportunity for someone to dismiss a defect as an environmental issue.
  • Productivity initiatives. Managers who want to get the most out of their QA resources will encourage them to script and streamline their work. That's good; take it a step further and have a computer do it. Use Selenium. Seriously, anything that can be scripted for a human to do can be scripted for a computer. Humans don't like that kind of work; let them focus on things humans are good at, like exploring.
  • Let me restate the last point: people who use software don't use it according to a script. Even when they do, they won't stick to it for long. They also don't use software consistently. Users forget what they should be doing, or they play around with the interface to discover functionality. If the actual users are going to use the software that way, shouldn't the testers test in a way that accommodates the users?
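To make the scripting point concrete, here's a minimal sketch of turning one manual test-plan step into an automated check. `cart_total` is a hypothetical stand-in for the system under test, not code from any real product.

```python
# A manual test-plan step -- "enter two items at $5.00, verify the total
# shows $10.70 with 7% tax" -- rewritten as an automated check.
# cart_total is a hypothetical function standing in for the system under test.

def cart_total(prices, tax_rate):
    """Sum the item prices, apply tax, and round to cents."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

# The scripted steps a human tester would follow, now run by the machine:
assert cart_total([5.00, 5.00], 0.07) == 10.70
assert cart_total([], 0.07) == 0.00
```

Once the steps are assertions, the machine runs them on every build, and the humans are freed up for the exploratory testing a script can't do.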
To many of those who manage QA departments, the previous four points cost more than they need to. They see exploratory testing as inconsistent and wasteful. They prefer detailed, standardized test plans that anyone can follow. I say they don't 'get it' when it comes to quality assurance.
I would argue that if you're going to go through the trouble of writing those test plans, why not write them in Selenium and have computers run them as JUnit tests? Better yet, integrate them into a continuous build server.
Why not keep a QA environment that is as close to production as reasonably possible? If the worst were to happen, it could serve as a failover, or act as an emergency production server.
The thing that probably bothers me the most is when a tester is encouraged to get through their test cases as quickly as possible and discouraged from looking into anomalies.
A big part of this comes from bad scheduling. Testers tend to get stuck under a time crunch because their allotted QA time is cut short because the rest of the schedule fell behind.
QA analysts are often put under the gun to finish their testing as quickly as possible. If you aren't going to give QA time to find the defects, why even go through the motions of having QA? As crazy as that sounds, I believe that most QA departments are sadly adding negative value. If QA only slows the project without finding defects, then they aren't adding value.
We can all agree that even the best of us make mistakes. Those of us who are married can attest that when we make a mistake, it's only a matter of time before someone notices. Let's call that moment the when. In software development, the sooner the when, the better. If the when comes while the customer is depending on your product, it will cost you much more than if it comes while you're designing the application. Testers and developers should strive to make the when now.
To achieve this you need to get the testers involved as soon as possible. Provide them something to test and give them the freedom to get creative in their testing. Have testers of varying levels of experience with the product. Have testers with no experience with the product and let them loose on it. You're going to have untrained users, why not have untrained testers?
The biggest expense with untrained testers is the drain they put on developers when their defect reports reach developers untriaged. There's a simple way to prevent this: have someone with experience act as the point person between development and QA. That person can find duplicates and build a relationship with the developers.
Although this may appear expensive, it's cheaper in the long run to invest in quality. Defects are expenses that cost more than can be clearly quantified. They cost in terms of credibility and reputation, which does turn into real money.
Think of it this way:
My first two cars were a Chrysler and a Chevy. Neither one of those vehicles would run for more than six weeks without breaking down.
My third car, was a 1987 Toyota Camry. The Camry lasted me four years without a single breakdown. Even the year when I didn't change the oil.
Since then I've owned another Toyota and a Subaru. Both of them have provided me with a perfect record of reliable transportation.
The next time I buy a car, do you think I'm even going to consider a GMC or Chrysler?

Sunday, April 20, 2008

Bering Sea Online, fun crab fishing mash up

Bering Sea Online is a fun Google Maps mashup game that plays like you're one of the captains on Discovery Channel's Deadliest Catch. I'm sure that any substantial similarities to the show are purely coincidental--especially the kind that the lawyers are interested in.
The premise of the game is that you are an Alaskan crab fishing captain and you're out fishing for crab.
BSO is very much a massively casual online game. It has a nice system of dealing with time. Time is the primary in-game resource. Every action requires time.
The game queues up time at a rate of 6 game hours every 15 real time minutes. That way if a player were to wait a real hour, she would have 24 game hours to play.
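That accrual rule is simple enough to sketch in a few lines. This is my reading of the rules as a player, not the game's actual code:

```python
# BSO-style time banking: 6 game hours accrue per 15 real-world minutes.
# Constants are taken from the observed game behavior, not from its source.

GAME_HOURS_PER_TICK = 6
REAL_MINUTES_PER_TICK = 15

def banked_game_hours(real_minutes_elapsed):
    """Game hours available to spend after a stretch of real time."""
    ticks = real_minutes_elapsed // REAL_MINUTES_PER_TICK
    return ticks * GAME_HOURS_PER_TICK

assert banked_game_hours(60) == 24   # one real hour banks one game day
```

The nice property of this design is that the game rewards patience without demanding constant attention, which is exactly what a casual game should do.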
The game is very much still in beta, but it shows decent polish.
The interface is simple and functional.
BSO is a fun way to burn a few minutes out of your day.

Wednesday, April 16, 2008

The cost of cutting cost

My company is relentless about cutting costs. If they can find a way to do something cheaper, they will jump at the opportunity.
I get a unique perspective on the situation because I sit near the supply cabinets. Ordering office supplies is one place where the company is very aggressive about cutting costs. Our administrative staff works hard at finding opportunities to spend less on office supplies. The quality of our supplies is not great. Actually, I find that I have better luck using the pens I bring in from home. They also don't get the engineering paper that I like; they get regular paper, the cheapest stuff they can find.
I like the engineering paper so much that I'll buy it in bulk. I had enough trouble finding it at the local office store that I ended up buying a bunch from Amazon.
The admins have also figured out that if they lock the facial tissue in a cabinet and only give out boxes to those who ask, they order less facial tissue.
Has that really saved money? We're spending less on our non-Kleenex, but we're asking people to spend more time getting tissues. Instead of grabbing a couple of boxes when they're at the supply cabinet, now people are either not getting it, or spending a chunk of time working with the Admin to get a box.
Did I mention that most of the people on my floor are software engineers and contract software engineers? On average, I would guess the hourly cost of one of us is around $100. That's $1.67 a minute. The Admin's office is a good 2-minute walk from the supply cabinet, and you can count on about 2 minutes of chitchat with her too. So round trip, you're spending about 6 minutes.
That extra trip to the Admin's office is going to cost around $10 per box of facial tissue.
Nobody is tracking this cost, but it's happening.
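For anyone who wants to check my math, here's the back-of-the-envelope calculation. The rates and times are the rough guesses from above, not measured data:

```python
# The hidden cost of locking up the facial tissue.
# All inputs are rough estimates, not measurements.

HOURLY_COST = 100.0             # loaded cost of an engineer, $/hour
PER_MINUTE = HOURLY_COST / 60   # roughly $1.67/minute

def trip_cost(walk_minutes=4, chitchat_minutes=2):
    """Cost of one round trip to the Admin's office for a box of tissue."""
    return (walk_minutes + chitchat_minutes) * PER_MINUTE

# About $10 in engineer time per box, on top of the price of the box.
print(round(trip_cost(), 2))
```

The box of tissue itself costs a dollar or two; the trip to get it costs several times that.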
A big part of the problem is that opportunities to cut cost are identified in isolation from their effect on the whole system. It reminds me of people with gambling problems. If you talk to one of them about their weekend, they'll tell you that they went gambling and won, say, $1,000. What they don't tell you about is the $4,000 that they lost.
A problem gambler looks at their gains in isolation from their losses. Some organizations look at their cost cutting measures the same way.

Even if Websphere and RAD were free, they'd be expensive

I was pairing yesterday with a colleague using RAD, and the true cost of this tool dawned on me during one of the many times it froze up for a few minutes. For reasons we could not understand, my partner's computer completely froze for a bit. Not just RAD, but practically the whole computer. We could see through the Task Manager that RAD was consuming all of his resources. All my partner did was try to open an XML file. Five minutes wasted for two people to open one file.
It's actually worse than that. Smart people will tell you that the most expensive thing you can do to a development team is to interrupt them and break their concentration. The term context switching is often used. Context switches are expensive for developers; the theory goes that developers don't go from zero to full speed instantly, that they need to get into a 'zone' to produce their best code, and that it takes about 20 minutes to get into that zone. I agree with this theory. It holds true for me, and every other developer I know agrees.
A context shift, or interruption, is actually more expensive than just the time it takes to handle the reason for the interruption. It also costs another 20 minutes of ramp up time.
RAD has a built-in context-switch generator that automatically pulls developers out of their zone. That 5-minute freeze is really a loss of about 25 minutes. That's the real cost of RAD.
Let's put a dollar amount on that. Let's say that it costs $100 an hour to pay for a developer's time and you have 6 freeze ups a day on RAD, each of which cause the developer to work at an average of half speed for 20 minutes, thus producing half the work they would if they hadn't been interrupted.
Each 5 minute freeze up would cost: $8.33 in lost productivity.
Each 20 minute ramp up would cost: $16.67 (10 minutes of lost output)
Each interruption would have a total cost of about $25.
If that happens six times a day you are looking at $150 in wasted productivity.
$150 a day wasted per developer.
$750 a week wasted per developer.
$39,000 a year wasted per developer.
I've played a little fast and loose with the numbers, but I believe this is a pretty conservative estimate. Consider the fact that the quality of the code produced by developers is going to suffer, and as a result extra defects will make it to your production environment, you're looking at much higher costs.
For a roughly $5000 per seat tool, you'd think that at the very least it would work well.

Tuesday, April 15, 2008

Hoarding Technology: or how not to secure your job

The technology hoarder is typically a male in his fifties who takes it upon himself to master a critical piece of his company's software and share it with no one else. Typically, the motivation is the hoarder's perception that his mastery of the technology will assure him of job security.
This move is tragically ironic; it reads like an O. Henry story.
The move you make to ensure your job security ultimately seals your fate: you'll be replaced at the first opportunity.
Why doesn't tech hoarding work? Managers don't like tech hoarders because a hoarder turns the team's bus number into 1--that is, if a bus were to hit that one person, the company would be in danger. Does anyone want to stake the future of a company on the health of a middle-aged programmer? I'm not trying to be prejudicial, but guys who work on computers typically aren't in the best health, nor do they typically make choices that prioritize their long-term health.
All it would take is a blocked artery and the whole company is at risk.
If my responsibility is an area with that kind of risk, my first priority is creating a contingency plan for the hoarder kicking the bucket.
Tech hoarding is unwise from a pragmatic standpoint because what it really does is make you dependent on the company. All the hoarding will be for naught if the company goes under. Your skills are your greatest asset. When you work for a company, you are investing your skills in it. When you tech hoard, you are betting all of your skills on one company--a company that, by definition, has at least one tech hoarder. Is that a wise way to invest?
Tech hoarding is also a bad idea from a technical standpoint. Tech hoarding stagnates technology around a business need, and the need is not unique. Another competing technology will one day make it easy to replace whatever need the hoarded technology satisfies. That's what technology is for. Don't fight it, embrace it.
I know a guy who could be described as a hoarder. He had a script that met a very specific need at a company I was working at--a daily need that hundreds of people depended on to do their jobs. The problem? The script's ability to meet the users' needs declined. The growing constraints put on the script grew its complexity and obscured the essence of its function.
A new way to meet the need was found. It met the users' needs better than the script ever did. The guy who hoarded that script found himself unable to add value to his company.
Here's the real tragedy of the situation. The script took so much of the guy's time that he did not work on anything else. He also didn't learn new technologies. When he found himself out of a job, he found himself unable to get another one.
How do you avoid this situation? For one, don't hoard technology. If you find that your value to a company is tied to a program, there will come a time when that company finds another program to replace your program and you.
Instead, try to do work that does not require your input once it hits its maintenance cycle. This is far more valuable than tying your job security to a program. For one thing, you don't have your previous projects bogging you down. People aren't constantly calling to ask you how to get things done in your old projects.
By creating software that doesn't have a lot of secret tricks you're also creating a name for yourself. People will notice good code and they will want to work with the people who make good code.
Lastly, it's your name that goes on the code. Do you want to be known as the guy who writes really complex code that nobody can understand? Others aren't going to assume that you're smart, they're going to think that you're a fool!

Saturday, April 12, 2008

Debug by smell

I had lunch with my first development team a while back. They had all worked together at Control Data before I joined their team. When I meet with them, it's usually as a gathering of other Control Data people, and they reminisce about the old Control Data computers and tell a bunch of Seymour Cray and CDC stories. It's a lot of fun to hear how computing used to be and how similar it is to today.
Often they talk about the size of the CDC supercomputers and how things were done on them back in the day.
At the last meeting, some of them talked about the dedicated team of electricians that were on hand for the computers and what a tough job it must have been to find and fix faulty wires.
To give a little perspective to this responsibility, I would recommend checking out Mark Richards' book Core Memory. Great book, BTW. Look at the pictures. Ok, see the wires? It was someone's job to go through those and determine which ones are faulty when there were problems.
I found out how the electricians did it.
My buddy Keith who I see at Caribou Coffee was an electrician for CDC. I asked him how they did it and the answer is pretty surprising. He told me that they would find it literally by smell. Usually when they'd have a wiring issue it wasn't the wire, but a solenoid. When a solenoid would burn out it would give off a distinct smell. The electricians could then replace the bad solenoid and carry on. He said the wires almost never went bad themselves.
Kind of gives new meaning to the term code smell.

Electronics recycling event trip report: 4/12, Falcon Heights, MN

We participated in a free electronics recycling program today. It may be hard to believe, but a geek can accumulate lots of unwanted electronics.
As I learned today, so does everyone else.
The recycling program was at the Minnesota State Fair grounds.
People would drive in and stop at one of many unloading sites. At the sites there were volunteers who helped move all of the electronics onto pallets. Each site consisted of a bunch of bare pallets and pallets with heavy cardboard walls--bins if you will. At each site, a forklift moved the full pallets onto semi trailers.
They did a really good job of moving us through. In all, it probably took only about fifteen minutes.
As a geek, it was really cool to catch glimpses of old televisions, computers, and whatnots. It was also pretty amazing to see the sheer volume of unwanted stuff.
Lastly, it is nice to get our closets and storage space back.

Thursday, April 10, 2008

Have you ever taken a job for a test drive?

We had a guy test drive a position in my department. The test driver went through the process of getting hired as a full-time employee at my company, including submitting to a drug test and signing an NDA. He worked for a few days. Then he quit--presumably because he found out that we really do work very hard at my company.
Here's the odd part: we found out he never quit his old job. He just took the week off and went back like nothing happened.
If that's not weird enough, the guy who he used as a reference was on my team and he is good friends with the test driver's original manager.
I happened to mention this shenanigan in conversation at this week's Groovy Users Of Minnesota meeting. Before I got past "We had one guy who worked for us for about two days...", the guy I was telling interrupted me with the test driver's name.
Turns out they work for the same company and the guy's notorious over there for the stunt.
I can't imagine how someone could muster the intestinal fortitude to pull something like that off and then go back to face his colleagues.

More Unit Testing Woes

Last night I talked to a former colleague of mine from the company that makes a really big enterprise Java application! Huge application! Ginormongulously huge! So big that it makes most IDEs, even our beloved IDEA, crawl if you're not careful. So big that they have a specialized team to compile the application each week and provide it as a weekly build to the people who actually write the code.
Developers pick up that build and work off of it as a base instead of compiling all 75,000+ classes--that's how big it was when I left. It's probably much bigger.
My friend was griping about an issue that he and his fellow developers are having with the people who build their HUGE Java application. The issue revolves around the unit tests.
This company is pretty forward thinking: they have a lot of JUnit tests. They call them 'Unit Tests'.
The developers run the Unit Tests locally before they commit changes to the source repository. When the integration engineers pick up the changes for a weekly build they compile them and then run the same Unit Tests.
Sometimes those Unit Tests fail in the integration environment because of differences between the integration and development environments.
The integration environment compiles everything in a vacuum, while the developers build on top of the prepackaged weekly build. Each group runs the tests within its own environment.
Having an integration validation test fail is a big deal. There are hundreds of developers who depend on the weekly builds, and breakage issues have historically snowballed into person-years of wasted time--that can happen quickly when you have over 365 developers.
In the developers' environments they have a set of properties and classes that they can safely assume will always be out in the wild.
My understanding of a Unit Test is that it should be completely independent of environmental issues. You're testing the unit in isolation to make sure that you are not violating its intended functionality.
If you want to test how units work together--and you should--create other tests for that and call them something different, like integration tests.
Back to the dispute: neither side is willing to budge. The developers believe they cannot write high-quality tests that run only inside a vacuum, and the integration people believe that unit tests should run in a vacuum.
I agree with the integration people on this issue to an extent. Unit tests should run regardless of environmental issues. They're meant to test each unit.
I also agree with the developers that testing just the units of their code is not responsible.
In this case I would recommend refactoring their process to accommodate a tiered testing strategy. Have Unit Tests that are Unit Tests and separate them so that just those Unit Tests are run by integration within the vacuum. Also have deeper integration and use case tests that can make use of environmental resources and are more fragile. Keep those tests separate and run them in a suitable environment. And run them as often as resources permit.
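That split can be sketched in miniature. I'll use Python's unittest for brevity rather than their actual JUnit setup, and the test names are illustrative, not from the company's codebase:

```python
# Isolation tests that run anywhere, kept apart from environment-dependent
# integration tests. The integration build loads only the isolation suite.
import os
import unittest

class IsolationTests(unittest.TestCase):
    """Safe to run in the build vacuum: no files, network, or properties."""
    def test_discount_math(self):
        self.assertEqual(round(100 * 0.9, 2), 90.0)

class IntegrationTests(unittest.TestCase):
    """Needs a real environment; run where resources permit."""
    def test_reads_environment(self):
        self.assertIn("PATH", os.environ)

def isolation_suite():
    # What the integration engineers would run inside the vacuum.
    return unittest.defaultTestLoader.loadTestsFromTestCase(IsolationTests)
```

The same separation maps directly onto JUnit suites or separate source trees; the mechanism matters less than agreeing on which tests belong in which bucket.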
This is the reason I don't care for the term 'Unit Test'. It carries different meanings for different people. Some use 'Unit Testing' interchangeably with testing in general, manual testing, or automated testing. The term's meaning has been soiled.
I think it would be easiest to abandon the term 'Unit Test' and to call the specific type of test an Isolation Test. I think that the word isolation forms a clearer picture in peoples' minds that the purpose of that test is to test something in isolation.

Groundhog Day: the meeting

Have you ever had a meeting that went like the movie Groundhog Day? I know people call weekly meetings that rehash the same things Groundhog Day meetings, but I've seen one worse. It's a few iterations of Groundhog Day within the same meeting!
One of my colleagues and I had this experience with members of another team. We had a very specific agenda to discuss a simple change that we wanted to make.
  • We explained the current and proposed future states.
  • We outlined the benefits and risks.
  • We outlined the scope of the change.
  • We provided a task list of what it would take to accomplish our proposed change.
  • We proposed a timetable for the plan.
At each point in the agenda the representatives from the other team asked specific questions and we provided clear and detailed answers.
Ok, here's the weird part. At about the point where we were providing the timetable it was like the other bullet points had not been covered. They started asking the same questions, in the same wording, about the first point again and then the second, third and so on. This cycle happened about four times.
We didn't know if they were messing with us or if we simply hadn't explained things clearly enough. It was a very surreal, very weird meeting.

Wednesday, April 9, 2008

The best team building exercise I've seen

At a previous job, my manager did one of the best team building exercises I know of.
He purchased the biggest kite he could find and organized a kite flying picnic lunch for the whole IT department.
The kite was something like 5 feet by 9 feet and required 500-pound test line. To fly this kite, we needed a group effort at every stage. Two people had work gloves, so they both held the line. It took two people to hold the kite up to get it in the air.
Someone had the idea to tie a digital camera to the string and have it record the flight. A person had to hold the camera as we ran with the kite and the line to get it in the air.
Once in the air, the kite was a challenge to hold on to. It had enough pull to lift me to my toes. The kite took another team effort to keep it in control. We let the kite out several hundred feet and tied the line to a tree.
We ate lunch and relaxed for a while.
It took another group effort to bring the kite in.
There were two people who stood on the line, one backing the other up. The guys wearing gloves would pull the kite down about 50 feet, then one of the others would stand on the line. Then the gloved guys would pull the line down another 50 feet and the other guy would stand on that. It took a lot of work.
Lastly, my manager rigged a cordless power drill to the spool and used it to reel the line from the ground onto the spool.
When we got back to the office we watched the movie from the camera. It was a really cool experience.
I think what made this such a great team building experience is that it wasn't planned as such. It was a fun activity that required teamwork but it didn't purport to be one of the contrived team building outings that often fail to meet their goals.

Tuesday, April 8, 2008

Mary P can say what I want to say so much better

I wish the people who turned my division into a bunch of center of excellences--awkward wording intentional--would have read this first.
http://www.poppendieck.com/pipeline.htm

DRY with Selenium--how to cheat the torture of wet corporate applications

I can't say enough good things about Selenium. Second only to Firebug, I'd say Selenium is one of the best add-ons for Firefox. It also has a really cool hidden talent: it can do your repetitive busy work for you.
What is Selenium? Selenium was designed as a way to create test cases through a browser (Firefox). The Selenium IDE is an add-on to Firefox, and it's very intuitive to start recording and playing back use cases. There's a red button in the upper right; if the button is pushed, it's recording, and you'll see your actions get added to the action table. Once you are done creating your use case, you can play it back by pressing the green go arrow. To make use cases really helpful, you can make assertions about values: select an element and right click, and you'll see a set of options, e.g., verify text present, wait for text present, etc.
You can also export Selenium tests as unit tests in Java and other languages. That's awesome!
I've found this tool to be very helpful for a number of scenarios. It has a very rich command set that has support for DOM lookups, variable assignments, and regular expression matching. What that basically does is allow for creating static tests that can run on dynamic content.
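For a picture of what a recorded case looks like under the hood, Selenium IDE stores each test as a plain HTML table of command/target/value rows. The fragment below is hypothetical (the locators and text are made up), but the commands are standard Selenese:

```html
<!-- Hypothetical recorded case: log in and check for a greeting.
     Column order is command, target, value. -->
<table>
  <tr><td>open</td>              <td>/login</td>        <td></td></tr>
  <tr><td>type</td>              <td>id=username</td>   <td>jdoe</td></tr>
  <tr><td>type</td>              <td>id=password</td>   <td>secret</td></tr>
  <tr><td>clickAndWait</td>      <td>id=submit</td>     <td></td></tr>
  <tr><td>assertTextPresent</td> <td>Welcome, jdoe</td> <td></td></tr>
</table>
```

Because targets can be DOM expressions or regular expressions, a static table like this can keep working even when the page content is dynamic.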
Selenium is easy for just about anyone to use. I had a business user, whose comfort zone with computers ended somewhere near Excel, sit down with Selenium and reproduce a complicated issue that he had trouble describing. I also asked him to go through a typical day of work with Selenium recording. He sat at my workstation and put a good hour in performing his typical tasks. Once he was done, I saved the test cases and turned them into use cases that were later made into JMeter load tests.
That's invaluable information to a developer. You're recording every keystroke and mouse click that your users perform. Instead of guessing how your tool is used, you know how it is used.
Ok, that's not what this is about.
Selenium got its name because selenium is a treatment for Mercury poisoning. Selenium doesn't cure my personal case of Mercury poisoning, but it helps with a lot of its friends. The thing I hate about my company's corporate applications is that I am constantly forced to repeat a very similar ritual every time I want to get something done. The ritual goes like this: find the link to the application, find my password for that application, log in, try to remember where to go, click a few links deep to get to the interface I want, then do whatever it is that I actually want to do. Everything up to that last step is a waste of my time. All it accomplishes is verifying my identity and letting me wander to the page that is useful to my task. It's a ceremonious mess. For a personality like mine, it's a mentally draining torture to accomplish a simple task. In terms of DRYness it's a soaking wet towel. It's horrible.
Selenium IDE is a kiln to your corporate applications' wetness. Instead of trying to remember the ceremonies, why not let Selenium do the remembering for you?
For example, my team now needs to fill out a help desk ticket to promote code from the development environment to the QA environment. The promotion itself takes approximately sixty seconds to perform, start to finish. The act of filling out the help desk request takes about ten to fifteen minutes, and it takes another ten minutes or so for the team that promotes the code to pull the request from the application. That's 25 minutes of ceremony to perform 1 minute of work. What a waste; even from our end we're spending 10 minutes of our time trying to communicate a simple request. A disruptive 10 minutes that only serves to record the request for the sake of our auditors.
I hate that kind of waste.
Fortunately, Selenium is the cure for this. All you need to do is record the ceremony around your task in Selenium. It might take a little tweaking--keep looping through the actions until it runs without failure. You can add wait commands and change some commands to look for DOM elements or links with regexes. Once you have your case, congratulations.
You have just dried up a sopping mess.
Use this knowledge well.

Monday, April 7, 2008

Delaying all irreversible decisions until after the last responsible moment

Yesterday I got to enjoy a drive with a direction-giving passenger, an experience that illustrates the biggest risk of delaying decisions.
This passenger gives directions in the most frustrating way for me. The directions themselves are fine, but this passenger doesn't tell them until it's too late.
For example, if you're driving in the left lane of a two-lane road, she'll tell you that you need to take a right about 25 feet from the intersection. There's a disconnect between the directions and the traffic situation.
When there's traffic, it's enough to set me off. I can't handle it at all.
What frustrates me is that she knows what the next action is, but she doesn't say anything.
All I want to hear when I'm making a turn is that I'll want to make a left turn about four blocks ahead at Washington. That's all I want.
If you don't know the street name, fine, but at least tell me what direction I'll need to turn.
Given directions early, my opportunities to prepare for the next action are maximized. It gives me a chance to adjust for conditions.
The closer I get to the next action point, the less opportunity I have to prepare for it. If I have a mile of traffic, then I have a mile's worth of opportunities to position the car for the next action. By not knowing the next action, the opportunities to prepare for it are squandered.
Getting directions when I can't act on them is similar to software projects where the best decision becomes clear and impossible at the same time. You see the best course of action and are powerless to do anything about it. Instead you need to either turn around or adjust your course. Either way, you're expending more effort than you needed to.
How do you fix this?
The best advice is to get a Garmin or other navigational tool. In software development, try to identify points of no return ahead of time and make sure that any decisions that depend on such a point are made before you reach it.

Saturday, April 5, 2008

I suggest we stop using Unit Testing

I suggest we stop using the term 'Unit Testing' and use the term testing instead.
The problem with the term 'Unit Testing' is that it refers to a specific type of testing. To many people, it means to test a unit of a program: the smallest discrete piece that you can. In the Java world, that usually means you have Unit Test class files that test a Java class file, and in each unit test class you test all of the methods of that class.
There are some who would create unit tests for each method and call the testing effort done. Some tests are better than none, but it's irresponsible and foolish to stop automated testing at this level. Unit tests alone are far too narrow to catch the nuanced causes that can make a program function differently than its creators intended.
First, let's get rid of Unit Testing as a term. It's already loaded and it means many things to different people. I think this type of testing is better described as isolation testing or micro testing. My preference is to have a set of method level tests that exercise the intent of those methods.
I prefer the term Isolation Testing because the tests are testing the elements of an application in isolation from their dependencies. In isolation testing, it's preferable to have tests that run on mocked data and stubbed services. They're only concerned with the isolated elements.
Beyond Isolation testing, I prefer to also have higher level integration and situational tests. Integration tests exercise how classes work together. They are more fragile than isolation tests, but I would argue that they are more important; they will break earlier. If it is feasible, use real data for these; mocking or nuke-and-pave is ok if situational tests are also in the suite.
Situational tests are full end-to-end tests that outline scenarios and user use cases. I like using tools like Selenium to capture scenarios from the user's perspective with tons of assertions. If it's at all possible, I prefer that real data be used for these tests. The more of these fragile tests you have, the better.
The final automated testing piece is defect regression tests. For each defect that is reported against an application, one should try to create tests to replicate how the software fails and test against it happening again. Some Selenium tests could be very useful in doing this, but tests at the integration and isolation level are good too.
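As a sketch of what a defect regression test can look like at the isolation level (the class and the defect here are hypothetical): suppose a reported defect was that a blank quantity field blew up instead of being treated as zero. Once fixed, a tiny test pins the behavior down so the bug can't quietly return:

```java
// Hypothetical defect regression test in plain Java.
// Defect report: entering a blank quantity crashed the order form.
// The fix made blank input parse as zero; this test guards that fix.

class QuantityParser {
    // Returns the quantity, treating blank or null input as zero (the fix).
    static int parse(String input) {
        if (input == null || input.trim().isEmpty()) {
            return 0;
        }
        return Integer.parseInt(input.trim());
    }
}

public class Defect1234RegressionTest {
    public static void main(String[] args) {
        // The original failure case from the defect report.
        if (QuantityParser.parse("   ") != 0) {
            throw new AssertionError("blank input should parse as 0");
        }
        if (QuantityParser.parse(null) != 0) {
            throw new AssertionError("null input should parse as 0");
        }
        // Normal behavior still works.
        if (QuantityParser.parse(" 12 ") != 12) {
            throw new AssertionError("expected 12");
        }
        System.out.println("regression test passed");
    }
}
```

The same failure could also be guarded at the Selenium level, but a cheap test like this runs on every build.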
One thing to note about the fragile tests: they will break early. It is very important to have an automated build server like CruiseControl or Hudson running automated build scripts after each modification to the source repository. The role of the automated build server is to test the code and let the team know when it has failed. It is important that the continuous build server runs a set of tests that completes in a reasonable period of time for checkins.
For one project I had the server run tests that would take 5 minutes to run. It would run 5 minutes after a checkin. 10 minutes after a checkin, the team would get notified if a submission passed or not. This was helpful at the end of the day. A responsible person could check in their last submission of the day and wait 10 minutes to see if the smoke test passed.
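With an Ant build, the separation can be as simple as a smoke target that only picks up the fast isolation tests by naming convention. This is a sketch; the paths, the classpath reference, and the naming convention are all assumptions:

```xml
<!-- Hypothetical Ant target: run only the fast isolation tests after
     each checkin; the slower integration suite runs on its own schedule. -->
<target name="smoke-test" depends="compile-tests">
  <junit fork="true" haltonfailure="false" failureproperty="smoke.failed">
    <classpath refid="test.classpath"/>
    <formatter type="plain"/>
    <batchtest todir="${reports.dir}">
      <!-- Assumed naming convention: fast tests end in IsolationTest -->
      <fileset dir="${test.src.dir}" includes="**/*IsolationTest.java"/>
    </batchtest>
  </junit>
  <fail if="smoke.failed" message="Smoke tests failed"/>
</target>
```

The build server then calls this target on every checkin, while a nightly target includes the integration and situational suites as well.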
The last component of my testing strategy is to include static code analysis and code coverage tools. I like Cobertura and Crap4j; Findbugs is also good. The role of these tools is to measure the complexity of the code and the amount of coverage the tests provide. The more coverage, the better. The less complexity, the better.
The above is my proposal to replace the term Unit Testing and the practices that we've come to know as unit testing in software development. If it is at all possible, I would encourage developers to try and write tests first also.

Friday, April 4, 2008

Mary Poppendieck gives a mean talk

I am now obsessed with Mary Poppendieck after watching Competing On The Basis Of Speed.
It's the second best hour I've spent all week. To put that statement in context I participated in four releases, finished one project, turned a corner on another project, kicked off a project, finally got completely out of credit card debt, and worked on my golf swing this week. The quality time I spent with my wife edges out Mary's talk.

Will the astronauts still be allowed to enjoy Tang?

Interesting read over at Bruce Schneier's blog about the liquid bomb plot. Apparently the terrorists planned to create a mixture of Hydrogen Peroxide and Tang. An interesting discussion ensues regarding the feasibility of using this concoction in a terrorist plot.
I'm personally waiting for the outright ban of Tang, and all granular solids, on flights. I also expect a press release regarding Tang's future in the space program.
Tasty, tasty, Tang.

I'm warming to the Amazon Kindle

I'm really beginning to like the idea of the Kindle. I'm not completely sold on it, but if you were to give me $400 and an iPhone I'd probably buy a Kindle.
I hear it is easy to read, it's lightweight, and the battery lasts a long time. The other thing I really like about the Kindle is how the technology is integrated into a very simple interface. You just turn it on and it works; there aren't cables or any other technical barriers to keep people from using it.
I like the idea that I could be at my parents' farm and read news and my favorite blogs. Maybe I could enjoy more than 36 hours out there without feeling like I'm missing the party.
Also, I'd love to be able to get a book and just read it. At my house we have an inflow of about 5 weekly magazines, a handful of monthlies, and a few quarterlies. That's not even counting the trade rags that get sent to my house that I don't read.
What I'm trying to say is that books and magazines take up space, and they're only really any good at the moment when you read them. The rest of the time the magazines vie for position in one of the magazine racks, or they're thrown into the recycling pile. Print is wasteful.
Lastly, the thing I like about the Kindle is that Amazon is not trying to nickel-and-dime customers on the back of the Kindle's hype. They include the wireless access in the price, and the book prices are cheaper than regular books.

Mary Poppendieck writes a mean essay

After hearing raving recommendations for Mary Poppendieck's books I took a little time to visit her web site and read some of her essays, found here. I've read three of them so far and I can see why there's so much love for Mary.

Thursday, April 3, 2008

Words of the day: Salmon and Grizzly Bears

Salmon: noun -- A fish that swims upstream and up waterfalls.

Grizzly Bear: noun -- a large carnivorous mammal that eats salmon as they swim upstream.

If you're going to waterfall, have a few grizzly bears on hand to deal with the salmon.

Measuring success

In most organizations, there is a set of requirements that a project needs to satisfy before it can move into a state where it is no longer actively worked. We usually call those states finished and canceled. We often overload those terms, finished and canceled, with the concepts of success and failure.
If we've finished the project according to plan, we've succeeded.
If we don't finish the project according to plan, we've failed.
Consider software. What makes software successful? I judge the success of the software that I use based on its ability to meet my needs. The software that meets my needs best is the software that I like best. Take Beyond Compare: as I wrote earlier, Beyond Compare meets my file comparison needs better than any competing product I know.
I don't know how it was made. I don't know if it was delivered on time. I don't know if it fulfilled all of the business requirements. I don't know if it was delivered within the budget. It could have been created by an 8 year old Romanian kid for all I know and all I care.
All I care about is the product's ability to compare files. It does it better than anything else I've used. Until, or unless, I find something that does it better, I am going to continue to use BC and recommend it to others.
None of the criteria I use to evaluate software directly relate to how it was made. Am I saying that deadlines and budgets are unimportant? Yes and no. They are important, but by themselves they do not measure the success of an effort. Often, the metrics we use on projects are incidental to the purpose of the project.
We do projects to meet a need. Our metrics usually focus on how much something costs, when it is finished, and is it what we thought it would be when we started. On budget, on time, and complete. You could meet all three of those goals and still provide something that doesn't meet the needs that the project set out to meet.
On time, on budget, and complete are easy metrics to measure. The guys in finance love these systems because they fit in their spreadsheets. If I were solely interested in promoting myself within an organization, I'd like this type of system too. I could game the hell out of it. You measure me by my numbers? Ok, I'll make my numbers better than anyone else's.
What is the true measure of success? It's not easily defined. You aren't going to see it on the delivery date. You're probably not going to see it within a year of delivery. It's going to show up well down the road, after most of the people on the project have moved on to other things.
I would like to try measuring the happiness of a team on a project and correlating it with the satisfaction and resulting happiness of the customers. I think happiness is as good a metric as anything else. If a team is happy, it seems like a safe assumption that the team members are motivated to put extra care into their work and take more ownership of the project.
There are many elements that can make a team unhappy. I think it is the task of leaders to eliminate, or at least minimize, as many of those elements as they can. The elements that make teams unhappy are the same elements that unhealthily distract team members from their projects.
Unhealthy distractions are the things that cause team members to gripe or despair, and those unhappy activities spread unhappiness. What causes unhappiness? Many things; a short list would include isolating team members from what they enjoy, requiring wasteful and/or tedious administrative work (e.g., unnecessary documentation), and disenfranchising team members. The list could go on.
The irony is that many of the unhealthy distractions are the result of disconnected managers trying to increase productivity by reducing what they perceive as waste. For example, when managers implement web filters that are overeager to banish sites with potential for abuse, e.g., YouTube, the net effect can be negative rather than the desired reduction of waste. Legitimate productive uses are lost in the name of stopping lollygagging.
Healthy distractions are distractions that have a positive effect on team members' happiness. If we can accept the fact that team members need breaks, then why not support them in distracting themselves? Provide break areas that encourage collaborative breaking and you'd be surprised what comes out of it.
Facilitate happiness. I think what we'll find is that if we focus on happiness, the other metrics will reflect a happy and healthy project.
With that said, still take all the metrics you normally would and add at least one more: a team happiness quotient. Find a way to gauge team happiness and track the hell out of it. More on how to do that on another day. If the team's happiness level is dropping, find out why and fix it.
As time progresses, the other metrics should trend towards the happiness quotient.
Because you really can't tell whether a project is successful at the time it is delivered, why not keep the metrics from the project and determine how well it met its goals later, when that can actually be measured? I am willing to bet iPhones to donuts that the happier projects are going to be the most successful ones.

Wednesday, April 2, 2008

Hurricane Shaped Excellence

Excellence is a term that is very popular within the management circles.
When I learned that my department was reorganizing into a center of excellence, I imagined something like Raphael's The School of Athens. How wonderful! It would be where we, the developers, could discuss our trades amongst ourselves and focus on how to develop excellently. I can see myself there in the middle. Eureka!!! I am Socrates! Excellent! And there's that guy on the right from the Guns 'n Roses Use Your Illusion I and II albums. I bet he's rocking. Excelsior!
Before the reorganization, we were siloed development units. As a developer, I had no idea who any of the other developers were or which of them worked with the same technologies as we did.
Then it happened. We reorganized into the CoE.
I wouldn't describe it as being all that excellent, even though we're in the center of excellence. We're now more like interchangeable parts than software developers. The project managers request us by our skillsets and then our resource managers assign us to projects. We're agile, fluid, and excellent. We're lots of things if you ignore the meanings of words.
We're told that we shouldn't support applications, but when something needs to get done, those of us with the knowledge to support them are all in the CoE.
What I believe is actually happening is that the people in the finance department have figured out that when we do one type of work they can categorize it as a capital expense, as something that adds value to the company; when we do maintenance-type work, they categorize it differently and it doesn't show up as additional value to the company. Weird, huh? Ignoring the problems with the company's software and creating more software actually makes the company worth more than making the existing software work better; ipso facto, developers are discouraged from spending any more time fixing bugs than is absolutely necessary.
To paraphrase William Shakespeare, "therein lies the rub" and to correctly quote him, "To sleep--perchance to dream: ay, there's the rub". Hamlet, Act III, Scene 1
We, the developers, are accustomed to providing service to our business users. When their tools are broken, we fix them, and they appreciate it. Now we're told that we need to minimize that. The correct way of doing business is to not speak with them directly, but to let the project managers handle the business users. We're silage again. Now, instead of being siloed from our peers, we're separated from our customers.
Excellence now conjures up a different image for me. It's a torus, or a hurricane. At the center of excellence there is nothing. The excellence surrounds the center, but the center itself is devoid of excellence.

New Methodology: Front Ahead Design

Brilliant read from The Daily WTF. Front ahead design or FAD. It's a new methodology that is bound to be widely adopted soon.

Remaining Competitive

"We need to remain competitive." You hear that a lot from PR people after a company announces a big layoff or a facility closing.
I try to imagine in what kind of competition a competitor could be competitive by remaining anything. Competitive napping? They always use competitive with a passive voiced verb and say that it's something they want or need. We want to be competitive. We need to be competitive in the marketplace.
Who is this we? I don't think it's the workers who are losing their jobs; I doubt they would rather sacrifice their livelihoods just so the company can remain competitive. It sounds too much like the company is relaxing into a nice competitive position in a hammock. Ah, so peaceful around the office without those workers making a racket.
Why even use the passive voice when describing the motivation behind a tough decision? It's like they found the deadest verb and then used it in the weakest possible way!
Wouldn't it be more considerate to change the company's rhetoric? At least use an active-voiced sentence. I think everyone would prefer to hear something like this.
"We're struggling to compete in the global marketplace. Our competitors are taking advantage of lax employment, accounting, and environmental regulations in a developing country. We cannot compete with their margins with our operations in the US.
We exhausted every alternative and it pains us to say that we're shipping our Human Resources, Public Relations, Accounting, and Finance departments to China. Operations will stay in the US. This move will position us to remain competitive in the marketplace."
A man can dream, can't he?

Cesspool Methodology

A friend and colleague described a horrible project that he participated in. This project was bad, really bad.
I know a few people who were part of the team, and all of the members that I respected did whatever it took to get away from that project as fast as they could. The project is also tragic because it could have been really good, but it was ruined.
The mission of the project was to replace an existing set of applications with a single web application. Each application contacted a different system in the company. There were some old green screen applications and an early generation Java web application. It was all going to be replaced with a nice integrated ajaxed web application.
The sharp developers suggested that they just break off pieces of what they knew needed to be done and develop them quickly, with the understanding that the pieces would be used by other parts of the program. I think they used a word, ag-eel-aye, must be Italian. I don't know what happened with agile exactly. It sounded like they were doing it for a while on the sly until someone found out and raised hell.
Instead of agile, the methodology that this project team's management chose to use is what my friend described, not as a waterfall, but a cesspool. He explained the differentiation, "a waterfall would imply that progress is being made."
The developers et al. were set on a mission to document how the existing applications worked. Every detail had to be documented before anything else got done. After weeks of documenting the old applications, the developers were then tasked with documenting how the new application would work. Everything had to be documented; no development would begin on the new application until the documentation and design were finished. So throw another couple of weeks away.
Why is this a terrible idea? Well, documenting is to many developers a form of punishment. It's like the myth of Tantalus: you are in a position to look at what you want, but are unable to actually reach it.
This method is doubly bad for developers. Not only do you need to look at what you want to do and do nothing about it, you also need to look at what was done before and do nothing about it. Couple all of this with the well known, if not documented, fact that documentation is a highly perishable good. It's wrong almost as soon as it's written, and unless there are technical writers, people who actually enjoy documenting things, it's probably not going to get corrected.
There's no payoff in sight for the developers. They're already sick of the project by the time they get to do what they enjoy.
I would urge anyone who finds themselves in a cesspool project to leave immediately.