Tuesday, December 24, 2013

Merry Christmas to all.

Merry Christmas everyone. Here are some fun things for you to play with over the Christmas break.  Remember to put your laptop down and just hang out with those family people.

Some Toys to play with:
First, a list of interesting free stuff, a present from Scott Hanselman.

Some Delphi Specific Fun Things:

Apologies if you've seen these before, but hopefully these will be new to some of you.



  • The Itinerant Developer blogged about a Firemonkey Container for VCL forms back in July. I missed it then; if you did too, check it out. Pretty cool. 
  • Nick Hodges's awesome new book mentions Delphi-Mocks by Vincent Parrett. I'm playing with it right now to increase my unit-testing powers. If you never read Vincent's intro post about it, it's a good read, even though it's been a year. I get the sense that lots of people still haven't seen the need for this, which means they aren't testing enough.
  • If you still haven't downloaded it, you owe it to yourself to go get Andreas Hausladen's latest IDE Fix Pack for your Delphi/RAD Studio. Thanks, Andreas, for building such a great add-on. I use it and swear by it in Delphi XE2, XE4, and XE5, the versions I have active coding duties in right now.

Update: Wow, great link at The Old New Thing to James Mickens's blog. Worth reading everything that Raymond Chen has linked to there. Every serious Delphi (or Win32 C/C++) developer should read Raymond Chen's blog, The Old New Thing. But you're all already reading it, right? I especially love the first link, about systems programming, called "The Night Watch".









Monday, December 23, 2013

What do you lose by moving to distributed version control from Subversion? When is using Subversion the right choice?

This is a follow-up post with a few counterpoints to the last post about distributed version control. There are a few things that I can see that Subversion does better than either Git or Mercurial. In the interest of fairness, I will point them out here:

1.  It appears that Subversion, given adequate server hardware and adequate local-area-network bandwidth, may be a better solution for teams who need to do checkouts of more than 2 gigabytes in size, especially if you simply must check in binaries and version control them. Your mileage may vary; in your situation, you may find that the point of inflection is 750 megabytes. A common example of this is video-game development, where assets easily hit 10-20 gigabytes, and are commonly versioned with Subversion.

2. Subversion has support for partial and sparse checkouts, something that you don't get with distributed version control systems, and all the attempts to add sparse checkouts to DVCS have been so flawed that I would not use them. The nearest relevant and useful DVCS equivalent is submodules. Most users who need to do partial checkouts in Subversion will find that they want to investigate using submodules in a DVCS. If submodules do not meet your needs, then maybe a CVCS is best for your team. If you need different users to have different subsets of the repo, use scatter/gather workflows, or otherwise do sparse checkouts (svn co http://server/trunk/path/to/sub/module3, rather than being forced in Git or Mercurial to do a clone, which is roughly equivalent to svn co http://server/trunk/), you may find Subversion meets your needs better; there's a command sketch below. It is a common rookie mistake to conflate DVCS repo scope with CVCS repo scope. DVCS repos are typically simpler and smaller, intentionally, rather than the "this server is your whole code-universe" monster-mega-repo strategy that Subversion limits you to.
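A rough sketch of the difference (server URLs and module names here are invented for illustration):

# Subversion: check out just one subtree of the repository
svn checkout http://server/trunk/path/to/sub/module3 module3

# Git: no sparse subtree checkout of this kind; the nearest equivalent is
# splitting module3 into its own repo and wiring it in as a submodule
git submodule add http://server/git/module3.git libs/module3
git submodule update --init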

3.  Subversion has global commit numbering; that is, your one and only Subversion server has a commit counter, and since this global asset is shared among everybody, "commit #3" on one computer can never be anything other than "commit #3" on anyone else's computer. Git and Mercurial instead generate globally unique hashes to identify commits, and the local commit numbers, where available, should generally just be ignored, as they are different for each user. For some workflows you might find the global commit numbering system suits you better than referring to hex strings of 8 to 24 characters that identify your commits and have no inherent ordering.

If I've missed anything, I'll add it in an update. Anyways, those are the three golden reasons I know of that some teams will want to evaluate DVCS, and then stick right where they are with Subversion, which, by the way, does seem to me to be the best open-source CVCS out there right now. I only wish there were a better GUI than Tortoise, and that they'd do a little work to make command-line merging less stupid.

Update: Someone was confused about why you would want users to "generate" hash keys, which means I didn't explain it properly. The version control system generates the hash keys; "hashing" means feeding your changeset through a cryptographic hash function. The possibility of a hash collision is so low that you will never encounter one. Git and Mercurial both use them, and I have never heard of a hash collision, ever. My only reason for mentioning it is that in a distributed system there is no single incrementing counter available to give you unique incrementing numbers. Not a big deal, but something to know before you leap. More info here. 
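If you're curious what these identifiers look like, both tools will show you the hash of whatever you currently have checked out (standard commands; no made-up flags here):

# Git: full or abbreviated hash of the current commit
git rev-parse HEAD
git rev-parse --short HEAD

# Mercurial: short hash of the working copy's parent
hg id -i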

Update 2:  Today I spent some time fighting Subversion working copy folder corruption. Issues like this one that were a big problem in 2008 and 2010 are still a big problem in 2014.  That's bad news for Subversion users.

Update 3: The big thing you'll lose when you leave Subversion behind is all the bugs and the missing features. Subversion and TortoiseSVN are giant piles of bad design and festering technical debt, and the parts that work bug-free still have glaring functional deficiencies. I don't think I'd miss Subversion one bit; if I could move the projects that use it to something else, I would.

Update 4 (2016): Subversion is junk, full of client- and server-side bugs. I take back most of my compliments above; I'm sick of Subversion and want to kill it with fire. Git can do sparse checkouts, and with Git and GitLab you can even lock files (a bad idea, but technically possible now). So there are zero technical reasons to keep using Subversion.



Friday, December 20, 2013

What am I missing out on if I'm not using Distributed Version Control?


As requested, I'm posting my thoughts about Centralized version control versus Distributed version control.  Git and Mercurial are the two tools I'm grouping together in this post as Distributed.

Centralized version control systems, like Subversion, Perforce, and Team Foundation Server, were, for their day, wonderful things. They are dated technology now, and for many teams, the benefits of moving to a distributed version control system could be huge.

But I Want One Repo To Rule Them All!

Fine. You can still have that. You can even prevent changes you don't want from getting into that One Repo, with greater efficiency. Distributed version control systems actually provide greater centralized control than central version control systems. That should sound like heresy at first, if you don't know DVCS, and should be obvious once you've truly grasped DVCS fundamentals.

Even if having a real dedicated host server is technically optional, you can still build your workflow around centralized practices, when they suit you.

Disclaimer: There Are No Silver Bullets.  DVCS may not be right for you.

Note that I am not claiming a Silver Bullet role for this type of tool.  Also, for some specific teams, or situations, Centralized version control may still be the best thing.  There is no one-size-fits-all solution out there, and neither Mercurial nor Git are a panacea.  They certainly won't fix your broken processes for you.   Expectations often swirl around version control systems as if they could fix a weak development methodology.  They can't.  Nor can any tool.

However, having a more powerful tool or set of tools, expands your capabilities. Here are some details on how you can expand your capabilities using a better, and more powerful set of tools.

1.  Centralized version control makes the default behavior of your system that you inflict your changes on others as soon as you commit them. This is the single greatest flaw in central version control. It leads to people committing less often, because they either have to (a) not commit, (b) decide to just go ahead and commit and inflict their work on others, or (c) create a branch (extra work) and then commit, and then merge it later (yet more extra work). Fear of branching and merging is still widespread, even today, even with Subversion and Perforce, especially when changes get too large and too numerous to ever fall back on manual merging. I recently had a horrible experience with Subversion breaking on a merge of over 1800 modified files, and I still have no idea why it broke. I suspect something about '--ignore-ancestry' and the fact that Subversion servers permit multiple client versions to commit inconsistent metadata into your repository, because Subversion servers are not smart middle-tier servers; they're basically just dumb HTTP-DAV stores. I fell back to manually redoing my work, moving changes by hand using KDiff3. With distributed version control, you can take control and direct when changes land in trunk, without blocking people's work, preventing them from committing, or forcing them to create a branch on the server and switch their working copy onto that branch, which in some tools, like Subversion, is a painful, slow, and needlessly stupid process. The sketch below shows the local-first workflow.
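A minimal sketch of that workflow in Mercurial (the server URL is invented):

# commit locally as often as you like; nobody else sees these yet
hg commit -m "WIP: refactor the frobnicator"
hg commit -m "WIP: fix edge case"

# only when the work is ready do you inflict it on the team
hg push https://server/central-repo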

2.  Centralized version control limits the number of potential workflows you could use, in a way that may prevent you from giving your customers, and the business, the best that your team can give. Distributed version control systems encourage lightweight working-copy practices. Making a new local clone is easy, very fast, and requires no network traffic. Every single new working copy I have to create from Subversion uses compute and network resources that are shared with the whole team. This creates a single point of failure, and a productivity bottleneck.


3.  Centralized version control systems generally lack the kind of merging and branching capabilities that distributed version control systems provide. For example, Mercurial has both "clones as branches" and "branching inside a single repo". I tend to use clones and merge among them, because that way I have a live working copy for each, and don't need any version control GUI, shell, or command-line commands to switch which branch I'm on. These terms won't make sense until you try them, but you'll find that having more options opens up creative use of the tools. Once you get it, you'll have a hard time going back to the bad old days.
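Here's roughly what "clones as branches" looks like in Mercurial (directory names invented):

# each "branch" is just a sibling clone with its own live working copy
hg clone app app-feature-x
cd app-feature-x
# ...edit, build, test...
hg commit -m "Feature X work"

# merging back is just a pull from the sibling clone
cd ../app
hg pull ../app-feature-x
hg merge
hg commit -m "Merge feature X"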

4.  For geographically distributed teams, Distributed Version control systems have even greater advantages.  A centralized version control system is a real pain over a VPN.  Every commit is a network operation, like it or not.  DVCS users can sync using a public or private BitBucket repository at very low cost, and don't even have to host their own central servers.


5. For open source projects, Distributed Version control permits the "pull request" working model.  This model can be a beneficial model for commercial closed source projects too.   Instead of a code review before each commit, or a code review before merging from a heavyweight branch, you could make ten commits, and then decide to sync to your own central repository.   Once the new code is sitting there, it's still not in the trunk until it is reviewed and accepted by the Code-Czar.

6.  For working on your own local machine, if you like to develop in virtual machines, having the ability to "clone" a working copy quickly from your main physical machine down into a VM, or from VM to VM, using a simple HTTP-based clone operation, can really accelerate your use of VMs. For example, my main home Delphi PC is a Dell workstation that has Windows 8.1 on the real machine, and runs Hyper-V with a whole bunch of VMs inside it. I have most of the versions of Windows that I need in there. If I need to reproduce a TWAIN DLL bug that only occurs on Terminal-Server-equipped Windows Server 2008 R2 boxes, I can do it. I can have my repos cloned and moved over in a minute or two. And I'm off.
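With Mercurial the whole trick is two commands (machine name and paths invented):

# on the physical machine, serve the repo over HTTP
cd C:\src\MyApp
hg serve --port 8000

# inside the VM, clone it
hg clone http://my-workstation:8000/ C:\src\MyApp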

7.  Rebasing is a miracle. I'll leave the details for later, but imagine this: I want to commit every five minutes, and leave a really detailed history while I work on some low-level work. I want to be able to see the blow-by-blow history, and commit things in gory detail. When I commit these changes to the master repo, I want to aggregate them before they leave my computer, and give them their final form. Instead of having 80 commits to merge, I can turn them into one commit before I send that up to the server.
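In Git, for example, that squash-before-push step is an interactive rebase (the 80 is just the running example; Mercurial gets the same effect with its rebase extension's --collapse option):

# replay my last 80 local commits and fold them together
git rebase -i HEAD~80
# in the editor that opens, leave the first commit as "pick" and change
# the rest to "squash"; Git combines them into one commit
git push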

8.  Maintaining the ability to sync fixes between multiple branches that have substantial churn in the code over time is possible, although really difficult. By churn, I mean that there are both changes that need merging and changes that don't. This is perhaps the biggest source of pain for me with version control, with or without distributed version control systems. Imagine I'm developing Version 7 and Version 8 of AmazingBigDelphiApp. Version 7 is running in Delphi 2007, and Version 8 is running in Delphi XE5, let's say. Even with distributed version control (Git or Mercurial), this is still really, really hard. So hard that many people find it isn't worth doing. Sure, it's easy to write tiny bug fixes in version 7 and merge them up to version 8, unless the code for version 8 has changed too radically. But what happens when both version 7 and version 8 have heavy code churn? No version control system on earth does this well, but I will claim that Mercurial (and Git) do it better than anybody else. I have done fearsome merges up and down between wildly disparate systems, and I will take Mercurial any day, and Git if you force me, but I will NOT attempt to sync two sets of churning code in anything else. I can't put this in scientific terms. I could sit down with you and show you what a really wicked merge session looks like, and you would see that although Git and Mercurial will do some of the work for you, you have to make some really hard decisions. You have to determine what some random change is that landed in your code, how it got there, whether it was intentional or a side effect, and if it was intentional, whether it's necessary in the new place where it's landing. If it all compiles, you could go ahead and commit it. If you have good unit test coverage, you might even keep your sanity, your remaining hair, and your customers.

9.  Mercurial and Git have "shelves" or "the stash". This is worth the price of admission all by itself. Think of it as a way of cleaning up your working copy without creating a branch or losing anything permanently. It's like the Memory button on your calculator, but it can hold multiple sets of working changes that are NOT ready to commit yet, without throwing them away either.
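For example (hg's shelve ships as a bundled extension you may need to enable):

# Git: park uncommitted changes, get a clean working copy, restore later
git stash
git stash pop

# Mercurial: same idea
hg shelve
hg unshelve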

10.  Mercurial and Git are perfect for creating that tiny repo that is just for that tiny tool you just built. Sometimes you want version control (at work) for your own temporary or just-started-and-not-sure-if-the-team-needs-this utility projects, without inflicting them on anybody else. Maybe you can create your own side folder somewhere on your Subversion server where it won't bother anybody, or maybe you can't. Should you be forced to put everything you want to commit and "bookmark" up on the server, as a permanent thing, for everybody to wonder "why is this in our Subversion server?" I don't think so.

11.  Mercurial and Git are perfect for sending a tiny test project to your buddy at work in a controlled way. You can basically run a tiny local server, send a URL to your co-worker, and they can pull your code sample across the network. They can make a change, and then they can push their change back. This kind of collaboration can be useful for training, for validation of a concept you are considering trying in the code, or for any of dozens of other reasons. When your buddy makes a change and sends it back, you don't even have to ask "what did you change?", because the tool tells you.
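And "the tool tells you" is literal; in Mercurial, for instance (your buddy's URL is invented):

# preview the changesets your co-worker would send you, with diffs
hg incoming -p http://buddy-pc:8000/

# happy? pull them in
hg pull http://buddy-pc:8000/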


Bonus: Some Reasons to Use Mercurial Rather than Anything Else

TortoiseHG in particular is the most powerful version control GUI I have ever used, and it's so good that I would recommend switching just so you get to use TortoiseHG. Compared to TortoiseSVN, it's not even close. For example, TortoiseSVN's commit dialog lacks filter capabilities. SVN lacks any multi-direction synchronization capabilities, and cannot simplify your life in any way when you routinely need to merge changes up from 5.1-stable to 6.0-trunk; it's the same old "find everything manually, do it all by hand, and hope you do it right" thing every time.

Secondly, the command line. The Mercurial (HG) command line kicks every other version control's command line's butt.  It's easy to use, it's safe, and it's sane.

SVN's command line is pathetic; it lacks even a proper "clean my working copy" command. I need a little Perl script to do what the SVN 1.8 command line still can't do. (TortoiseSVN's GUI has a reasonable clean feature, but not svn.exe.) Git's command-line features are truly impressive. That's great if you're a rocket scientist and a mathematician with a PhD in Graph and Set Theory, and less great if you're a mere human being. The HG (Mercurial) command line is clean, simple, easy to learn, and even pretty easy to master. It does not leak the implementation details out, and all you should need to evaluate this yourself is to read the "man pages" (command-line help) for Git and Mercurial. Which one descends into internal implementation jargon at every possible turn? Git.
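For what it's worth, the "clean my working copy" job that takes a Perl script with SVN is one command elsewhere (hg's purge is a bundled extension you may need to enable):

# Git: delete untracked and ignored files (destructive; use -n to preview)
git clean -fdx

# Mercurial: same job, via the purge extension
hg purge --all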
I've already said why I prefer HG to Git. I could write more about that, but I must say I really respect almost everything about Git. Everything except the fact that it does allow you to totally destroy your repository if you make a mistake. That seems so wrong to me that I absolutely refuse to forgive Git for making it not only possible but pretty easy to destroy history. That's just broken. (Edit: I think that this opinion in 2013 was based on inaccurate information; rewriting history is acceptable, and in fact permitted, in both Mercurial and Git, and the chance of a well-meaning developer accidentally erasing his Git repository remains a very small risk.)

Side Warning


Let me point out that you need a backup system that makes periodic full backups of your version control server, whether it is centralized or distributed.  Let me further point out that a version control system is not a backup system.  You have been warned. If you use Git, you may find that all your distributed working copies have been destroyed by pernicious Git misfeatures.  If you choose to use Git, be aware of its dark corners, and avoid the combinations of commands that cause catastrophic permanent data loss. Know what those commands are, and don't do those things.

 Get Started: Learn Mercurial

If you want to learn Mercurial,  I recommend this tutorial: http://hginit.com/

I really, really recommend you learn the command line first, whether you choose Git or Mercurial. Make your first few commits with the command line. Do a few clone commands, do a few "show my history" commands (hg log), and a few other things. If you don't, you will never master your chosen version control system. GUIs are great for merges, but for getting started, learn the command line first. You're a programmer, darn it. This is a little "language"; it will take you a day to learn. You will reap the benefits of that learning time forever.
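A first session might look like this (all standard Mercurial commands; names invented):

hg init myproject          # create a brand-new repository
cd myproject
echo hello > readme.txt
hg add readme.txt          # tell hg to track the file
hg commit -m "First commit"
hg log                     # show my history
hg clone . ../myproject-experiment   # cheap local clone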

And The Flip Side:  Reasons to stay with Subversion



(2016 update: Almost every team I know that hasn't moved to Git, has at least some people on the team who wish they could, and at least one stick in the mud in a position of power, overriding that team instinct.   I've changed my mind about Mercurial versus Git as well, and recognize that the programming world has chosen Git.  So it's time to learn how to fit in, instead of being the guy that sticks out because of his outlier opinion.   The tools and the community around Git are superior to the tools and community for Mercurial. )

Thursday, December 19, 2013

Book Review: "Coding in Delphi" : A new Delphi book by Nick Hodges

Right now you can only get it by buying Delphi XE5. I just finished reading it, and I love it. I think the title needs to be understood with the word "Modern" in there somewhere; maybe "Coding in a Modern Delphi Style" would fit the content well. The title is fine, but I'm putting that out there so you know what you're in for.

Did you want to know more about the proper use of generics, not just the consumption of a generic type (making containers using TList<TMyObject>), but also enough to start really building your own generics? Do you want to learn how to dive into Spring4D (Spring for Delphi) and use its powerful capabilities? Do you want to learn to use fakes (stubs and mocks) to isolate classes so they can be truly unit tested? Do you understand the real difference between unit testing and integration testing? If you're not using isolation frameworks, or you don't know how to use them or how to isolate your classes, is it possible that there's a whole new and better level of test coverage waiting for you?
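To give just a flavor of building your own generic type, here's a toy of my own, not an example from the book (a hand-rolled generic stack over TList<T>):

uses
  System.Generics.Collections;

type
  // A tiny hand-rolled generic: a stack of anything
  TSimpleStack<T> = class
  private
    FItems: TList<T>;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Push(const Value: T);
    function Pop: T;
  end;

constructor TSimpleStack<T>.Create;
begin
  inherited Create;
  FItems := TList<T>.Create;
end;

destructor TSimpleStack<T>.Destroy;
begin
  FItems.Free;
  inherited;
end;

procedure TSimpleStack<T>.Push(const Value: T);
begin
  FItems.Add(Value);
end;

function TSimpleStack<T>.Pop: T;
begin
  Result := FItems[FItems.Count - 1];
  FItems.Delete(FItems.Count - 1);
end;

The payoff: TSimpleStack<Integer> and TSimpleStack<string> both come from that one declaration.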

Do you feel that you were late to the party, and nobody ever explained all the things that the cool, up-to-speed Delphi developers know? This book is conversational, friendly, approachable, and code-centered, and although I already understood 90% of what is in it, I still found there were some parts that really made my head bulge a little.

The book focuses on the raw coding aspects, and low level coding concerns, not on high level issues like user interface, or even architectural concerns like MVC.   The well known and beloved core Object-Oriented principles in the S.O.L.I.D. family, as preached by Uncle Bob Martin, are mentioned and given brief homage, but not expanded upon.  This is well and good, or Nick would still be writing this book and it would be eight times its current length.

Having finished reading the book, I'm going to go download all the add-ons and frameworks and libraries that Nick has gone over, and spend some time playing with them, and then I'll go read the book again.   I think that if you try to absorb everything in the book at once, you might get a bit of a headache. Instead I took it in over a couple days, let it wash over me and then I'll go back over it again in more detail.

I believe so strongly in the KISS and YAGNI principles that I still don't think I'll be using all the shiny things that Nick talks about in this book, unless I can clearly see how using them benefits both the developers and the users of the code. But I'm so clearly sold on the benefits of isolation, testability, and design for quality that you don't have to sell me any further. It's just a matter of using these technologies in a way that doesn't substantially degrade debugging. (That is the secret downside of interfaces, isolation, and dependency injection: your code can turn into a mysterious pile of semi-readable angle-bracket soup.)

I am hopeful that there is a way to satisfy both the desire in me to build testable, quality systems, and the desire to build readable, simple systems that don't go down the Hammer-Factory-Factory road to hell.

If you want a copy, go buy XE5, or wait until some time next year when the book will have a wider release.  It's great though. You really should get it.  Nick joins my other favorite Delphi authors, like Marco Cantu, Bob Swart, Xavier Pacheco, and Ray Konopka on the virtual Delphi book authors shelf in my coding book collection, and it's a fine addition to the canon of Delphi books.

  

Five Code Upgrade Snares and How to Avoid Them

Here are five upgrade snares that have slowed progress, or even stalled attempts, by well-meaning, competent developers who have tried to migrate large codebases up to new Delphi versions.

1. The "I must not rewrite anything" snare.


The person who cannot decide that something needs to be fixed will never upgrade anything. The person who values the patch he made 10 years ago to an ancient version of TMainMenu will never upgrade. This is the first snare; it's a bear trap, and it's got your leg.

To avoid it, remember that your goal is to arrive at code that builds in both Delphi 6/7/2007 and XE5.
Don't panic.  Be calm.   Now continue.

2. The "I must rewrite everything" snare.


Having escaped the first snare, the hapless code-upgrader stumbles a little further along, is heartened by having survived the first trap, and decides to rewrite almost everything. He was last seen stumbling into the Mojave Desert. His current whereabouts are unknown.

To avoid this snare, just as with the point above, concentrate on making the code build, and pour your new coding effort into increasing unit test coverage.

3.  The "Failure to Recognize that We are in Boston, rather than Chicago" snare.


In Delphi 6, or Delphi 3, or Delphi 7, there were features that are no longer recommended. The BDE is one of them. If you think you can upgrade from Delphi 6 to Delphi XE5 and continue to use the BDE for another 10 years, you're going to have a bad time. Not just the BDE; a whole host of things have gone to "should not use" status. There are also new VCL framework features, including TForm.PopupParent, which should replace a lot of the Z-Order hacks you invested huge time in back in Delphi 6 (see the sketch below). You can stage your changes, and you should stage them. Stage 1 might be "it compiles and passes unit tests, but uses BDE". Stage 2 might be "goodbye BDE". Stage 3 might be "it properly reads and writes the Unicode data formats I want it to read, such as XML".
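For the Z-Order case, the modern replacement is usually just two property assignments (TMyDialogForm is a hypothetical form class):

var
  Dlg: TMyDialogForm;
begin
  Dlg := TMyDialogForm.Create(Self);
  try
    // VCL (Delphi 2007 and later): explicit Z-Order parenting, so the
    // dialog stays in front of this form without any API hacks
    Dlg.PopupMode := pmExplicit;
    Dlg.PopupParent := Self;
    Dlg.ShowModal;
  finally
    Dlg.Free;
  end;
end;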

The second half of this trap is assuming that porting means a series of mechanical changes that you don't quite understand. Don't make a change you do not precisely understand. Unicode porting usually leads to a lot of naive things being done in your early days. For example, on your first day of porting, if you decide "all my Strings will be AnsiStrings", you're going to have a bad time. There is no "100% always do this" law; you must know what you're doing. But you should generally choose AnsiString when you need a single-byte string, and use String (UnicodeString) everywhere else (see the sketch below for a classic pitfall). Read the excellent Unicode porting materials provided by Marco Cantu and Nick Hodges. To avoid this trap, read these guides, and follow them.
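Here's one classic example of a mechanical change gone wrong, sketched with a stream purely for illustration:

uses
  System.Classes;

var
  S: string;               // UnicodeString in Delphi 2009+; Char is 2 bytes
  Stream: TMemoryStream;
begin
  S := 'Héllo';
  Stream := TMemoryStream.Create;
  try
    // WRONG after porting: Length(S) counts Chars, not bytes,
    // so this writes only half the string's data:
    //   Stream.WriteBuffer(S[1], Length(S));

    // Right: scale by SizeOf(Char) -- or better, be explicit with TEncoding
    Stream.WriteBuffer(S[1], Length(S) * SizeOf(Char));
  finally
    Stream.Free;
  end;
end;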

4.  The "Hacks and Customizations" snare.


One day in 2001 you decided to modify TButton. Then in 2002 you decided to modify TForm, and in 2003 you decided to modify TDataset. You had to; at the time there was no other sane choice. But you will now have extra work to port your not-quite-VCL-or-RTL-based codebase up to your new Delphi version. My suggestion is to make the code BUILD without VCL or RTL customizations, and function well enough to pass unit tests. Then investigate new fixes for your previously required workarounds that are possible with the new framework.

I have also been trapped by this one, in the form of heavily customized Developer Express commercial third-party components, where I had to port my changes from the 2004 version to a modern version. Due to the nature and extent of my changes, this took me months. In the future I would perhaps attempt to do more with subclassing, rather than modifying the component itself. However, the problem is that you can only do so much with subclassing; sometimes you're really stuck and you do have to modify the original code.
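When subclassing is enough, the VCL "interposer" idiom keeps your tweak out of the vendor's source; the override below is a made-up example:

unit MyButtonFix;

interface

uses
  Vcl.StdCtrls;

type
  // An "interposer": a same-named descendant. Any form unit that lists
  // MyButtonFix *after* Vcl.StdCtrls in its uses clause gets this class.
  TButton = class(Vcl.StdCtrls.TButton)
  protected
    procedure Click; override;
  end;

implementation

procedure TButton.Click;
begin
  // ...your workaround goes here, isolated in one unit...
  inherited Click;
end;

end.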

Other than not getting into this trap in the first place, the way out is with care, with unit testing, and with proper version control tagging.  Attempts to remove customizations should be accompanied by tests, automated or manual.  Preferably both.

5.   The "Wandering in the Darkness, Surrounded by Grues" snare.


If you do not have unit tests, you will not know if you broke things.

When the code compiles and passes unit tests, you are on the path.

When the code compiles but passes no unit tests, you may still be on the path. But you don't know. The light is dim.

When you have spent several days in the dark, and continue making changes to code that does not compile, you are certainly not making progress.

You have been eaten by a grue. You have died.



Avoiding this one deserves more detail in a future blog post.  But it is mostly about unit tests, live use of your app after each change so you know if you broke the app's ability to run and do fundamental things, and the same sort of education and preparation steps as the previous point.  I have some more specific ideas on this one, that will be explored in a part 2 post.


If you've got another snare that we can all learn about then avoid, please post it in the comments!


Saturday, December 14, 2013

Effectiveness - How to increase it.

Nick Hodges recently posted a link to Mike Hadlow's blog post about working hard versus working smart. I think it's brilliant. I have often said that, as a developer, it's better to be lazy (while getting more done) than hard-working (if by hard-working you mean banging your head repeatedly into walls rather than walking around them). I believe you measure results, not hours that bums are in seats.

Anyways, rather than sticking on that point and belabouring it, I want to talk about the practices that have made a real difference in my effectiveness, in the order of their importance.  I promise to be biased, opinionated, and honest.   I do not promise to have scientific proof for any of this, only anecdotes and war stories. I use Science when appropriate, but I approach the whole business as a craft.

1.  Bisect and Isolate Problems to Determine Root Causes

Do not guess why something is not working; learn why it is not working. When you know, you know; when you do not know, you cannot be an effective developer. If you do not understand and locate the exact source of a problem, you do not know if you fixed it. This is the single most important skill a developer who gets things done must have.
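Your version control system can even drive the bisection for you; in Mercurial, for example (revision numbers invented):

hg bisect --reset
hg bisect --bad           # the current revision exhibits the bug
hg bisect --good 1500     # revision 1500 was known to work
# hg updates your working copy to a midpoint; build and test, then:
hg bisect --good          # or --bad, and repeat until the culprit is found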

2.  Learn New things, and learn Old things too

If you love what you do, you want to know the things that will make you more effective.  I do not know any highly effective developers who know only one programming language, one approach to problem solving, or who hate reading books about their craft.  Zero.   Do you realize how much amazing stuff is out there that you could download and play with? You haven't downloaded and tried Pharo?  Go? Python? Clojure?   None of those?  Sad, really sad.    You mean to tell me it's 2013 and you haven't read the Mythical Man Month?   Not just the one essay,  the whole book.  Oh, I weep for you.

3.  Clean code

This has always been a personal point for me. I hate leaving my code in a place where it emits warnings or hints.   I do not like code that has no test coverage.  When code is not exactly right, sometimes I say it has a "code smell".    I mean the same thing by Clean Code that Robert Martin means by it, but I like to stress that it's an ideal that you move towards, rather than a thing you achieve.  The perfectionist streak in me detests the idea that any code ever is clean. It never is.  It could always be cleaner. The standard is "what is clean enough that my co-workers, my employer's customers, and my own ability to work are not impeded". That's clean.

4.  Do Science

Admit it, you thought I was going to harp on unit tests as a separate point, didn't you? No. I'm leaving that as a sub-point, a reasonable practice within the general norms of Clean Code. But clean code is not enough, or even my main goal. I am a person who, like a mathematician in search of an amazing new result, is pleased by beautiful mathematical perfection in my work. I find that the most effective way to find where those beautiful bits of software are inside your current working product is to be a scientist. You want to know what a variable does? Take it out and find out. You want to know why this application was designed with a global variable that makes no sense? Rename it and find all the places where it's used, instantly. You have version control; poke your code, prod it, abuse it. Run it on really slow computers so you can see the screen draw, figure out why it's so flickery and crappy, and make it better. Be a scientist. Do science. Ask a question. Form a hypothesis. Test the hypothesis. Revise the hypothesis. Remember that working copies should be cheap (you do use distributed version control, right?), and that any experimental changes you make should be easy and fast to revert.


5.  Use structure effectively

In the beginning was the machine.  Darkness moved on the face of the deep, it was formless and void. The programmer had a manual of opcodes.  The programmer built programs out of opcodes, which were data values stored in words (registers) in the machine. The programmer somehow flipped switches, or clipped bits of paper out of cards, and somehow got those 1s and 0s into the machine, and pressed a button or flipped a switch and the code ran. Or didn't.  Mostly it didn't run.  The programmer went back and tried more stuff until some lights turned on, and the early computer printed a character on the teletype.  Only by sheer brute will power was anything ever accomplished by this early programmer, with no Language tools.      The evolution and development of compilers and computer programming Languages was a response to the futility and low productivity of these early tools.   Practices, and ideas about how to use these languages grew and evolved along with them.    Today you have Object Oriented, Aspect Oriented,   Post-OOP, Dynamic and Statically Typed, Functional, and all other manner of attempts to provide Just the Right Structure for your program.  

The two cardinal sins against the effective use of structure are putting too much structure in, so that the structure has no purpose or reason behind it, and not putting enough structure in. It is my considered opinion that the Design Pattern movement came to love using patterns so much that it overdid it. I have seen a Factory for constructing Singletons that have a single method, which could have been implemented as a non-OOP procedure and would have been better off for it. Most Delphi codebases are too far the other way. They tend to have 50K-line methods, 100K-line units, and all the code in the .pas file that Delphi created for you when you created your main form, because making new units with reasonable cohesion and minimal coupling was too hard.

When a codebase has not got an effective structure, the productivity of everyone working on it is sapped to nearly zero.

6.   Use tools effectively

If you do not learn how to use the Delphi IDE, command-line compiler, build systems, continuous integration tools, profilers, static analysis, and any other tool that exists to help you, you are less effective than you could be. If you never learned how a distributed version control system works, even if you don't actually use one for your company's central repository, you're missing out on one of the most powerful software tools ever invented: Mercurial. There's Git, too, if you like accidental complexity and technical hurt. (I never promised objectivity here; this post is 100% opinionated.)

7.   Build new tools or frameworks but ONLY when it is effective to do so.

Sometimes a new tool is a script written in 100 lines of Python (for fans of code that is zen-like in its beauty) or Perl (if you like accidental complexity and revel in it). Sometimes that 100 lines takes 10 hours to write, saves you 100 hours, and then saves you another 100 hours a year later when you need it again, or something almost like it. This is something people used to think was only possible if you were a LISP hacker. Nope. You can build tools to accomplish anything you can imagine and then precisely, formally define in code. This meta-coding mentality can go TOO far; see the famous "I hate frameworks / hammer factory factory" forum post for more on that. You have to build tools when, and only when, they are justifiable.


What are your top effectiveness tips? Please share.



Wednesday, December 11, 2013

Modernize your codebase: Inspiration to Ditch your Ancient Delphi Version

I still see a lot of people running old versions of Delphi. The most common version people freeze at is Delphi 2007, because of the Unicode changes to string types, but I have also seen the odd one stick at Delphi 7, or even Delphi 6.

Here are some suggestions on getting out of those Don't-Upgrade-Ever Tar-Pits.

1.  You may lack motivation. You may not realize how much you need a modern Unicode string type. You start by believing you can upgrade your AnsiString-forever, or ShortString-forever, codebase as-is. Yes, yes you can. But the internet is Unicode, and every computer is connected to the internet 24/7. WideString is not enough. The modern String (UnicodeString) in Delphi is fast, flexible, and internet-ready. I've spoken to people who aren't upgrading because they assume the new String is slower. On some benchmark code (Sieve of Eratosthenes plus some string-crunching code), the latest Delphi XE5 is actually faster at UnicodeString (String) operations than Delphi 7 was at AnsiString (String) operations, and at basic IntToStr and StrToInt functions.
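If you're sceptical of the speed claim, it's easy to measure for yourself; here's a quick-and-dirty console sketch (timings will vary by machine; TStopwatch requires Delphi 2010 or later):

program StrBench;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Diagnostics;

var
  SW: TStopwatch;
  I: Integer;
  S: string;
begin
  SW := TStopwatch.StartNew;
  for I := 1 to 1000000 do
    S := IntToStr(I);
  SW.Stop;
  Writeln(Format('1,000,000 IntToStr calls: %d ms (last = %s)',
    [SW.ElapsedMilliseconds, S]));
end.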

2. Upgrade for productivity boosts. The IDE Insight feature is fantastic; once you get used to it, it's hard to use a version of Delphi that lacks it. It's Google for your IDE. I like to supplement it with the GExperts file-open dialog, but the combination of the two is magic.

3.  You've been down the Ancient Delphi Codebase road, and you know what a pain things like Z-Order bugs are. If it helps you decide to move up, remind yourself that the VCL in 2007 and later supports proper Z-Order parenting, so if you're on Delphi 6 or 7, it's time to say goodbye to silly Z-Order bugs (Window A behind Window B). If your code is not Unicode-ready yet, you can get XE5, which entitles you to a Delphi 2007 license (previous-versions access); get your Z-Order bugs fixed, then continue to work towards proper modern Unicode readiness in your codebase.

4. Maybe you don't really need 64-bit Delphi, but having the possibility open sure is good, right? Or maybe you would like to build something 64-bit? Running Windows 64-bit? Want to access lots of memory? Want to write a shell extension? 32-bit may be great for lots of things, but having 64-bit in your bag of tricks is worthwhile. 64-bit Windows will be so mainstream eventually that (like Unicode) the whole Delphi world will eventually move to it, and Win32 will eventually fade away. Yes. Yes, it will. Do you still want to be building Win32 apps when Win32 goes the way of Win16? Remember NTVDM? WOW64 is the new NTVDM. It isn't where you want your apps to live, is it? Inside the backwards-compatibility box? So be ready.

5.  Mac OS X support. You can write apps for OS X with Firemonkey.  Maybe this isn't a "need" thing, but it's nice to have in your tool-belt.    Being on a modern Delphi version means having this around when you want it.

6.  iOS support.  You can write native iPhone apps with either RAD Studio XE 5 or Delphi XE5 + the mobile add-on.   Your kids will think you're cool, if you make them a little App for Christmas. Right? Of course right.

7. Android support.  You can write Android apps using the Delphi compiler for native ARM and Android NDK.  

8.  If you're stuck on a version prior to 2010, you don't have proper platform support for Windows 7 and Windows 8. Things like the file-open dialogs changed at the Vista level, and that support was never in Delphi 6, 7, or 2007. I think it's kind of funny that Delphi grew proper Glass (DWM) support, and then Microsoft suddenly yanked Glass out of Windows 8. But maybe it will come back in Windows 8.2. You think? Anyways, the core VCL on modern versions like Delphi XE5 is a lot less painful for supporting Windows 7, 8, and 8.1 than the creaky old version you're using, if your version is older than, say, Delphi XE.

9. VCL Styles. This is an amazing way to give your VCL application a face-lift without rewriting it. There is a bit of work involved, but a lot less than you'd think. Amazing third-party add-ons like RRUZ's helper library make it easy to make your VCL app look great. If you haven't got at least Delphi XE2, you don't have this great feature. To go with that pretty style support, add some VCL gesture support: support touch gestures on a touch-screen laptop or Windows Surface Pro tablet, using VCL gestures. Swipe, pinch, or make your own custom touch gestures. Combine this with VCL Styles and you can modernize your look and functionality from "90s" to "up to date" pretty quickly for your core Win32 desktop apps.

10.  Delphi language evolution. If you're still on Delphi 6 or 7, and you don't have Generics, or modern RTTI, or the new grouped unit-naming system (Unit Scopes), you might be surprised how much nicer and more readable these things make your code. For example, I love my uses clauses to be grouped with all the System units first, then VCL units, then my third-party library units; the ones that have no dots in the name are either primitive third-party libraries or my own units. It's amazing how convenient and "expressive" these longer names are. I love them. There's a sketch of what I mean below.
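Here's a sketch of that grouping (the trailing comment stands in for whatever third-party and project units you'd list):

unit UsesGroupingExample;

interface

uses
  // System (RTL) unit scope first:
  System.SysUtils, System.Classes,
  // then the VCL unit scope:
  Vcl.Forms, Vcl.Controls;
  // third-party libraries and your own un-dotted units would follow last.

implementation

end.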

I hope this list gives someone the motivation they need to kick themselves in their own butts and get their codebases out of Delphi versions that should be retired, and moved onto a modern Delphi version.  You'll thank me.

Oh. And how do you actually DO it when you get the nerve?  I'll write more about that later.   But for starters, I suggest you try to write some Unit Tests.  Remember that while you are in progress of modernizing your codebase, you can do it without ever breaking your application or introducing unknown behaviour.  I'll write more about that approach, in an upcoming post.


Update:  If someone is sceptical about the value of updating, I ask them, what does it hurt to download the trial and try it?   You can do almost anything you want to try with the demo version, get your code mostly ported, try to port your components up,  get stuck, and post questions on StackOverflow. If you really love Delphi 7, great. It was a great version.  Enjoy it until the heat death of the universe, or at least, until the sun goes super-nova. No problem.  But most of us moved on long ago, and I just wanted to let you know why we did.

Saturday, November 30, 2013

A Code for Software Professionals

Robert Martin defined "professionalism", in the context of professional software engineering, as the "disciplined wielding of power". You can catch the talk, given at a Ruby conference in 2009, called "What Killed Smalltalk Could Still Kill Ruby". You could substitute "Delphi" or "your career" in place of the word "Ruby" there, if you think that having Ruby in the title of the talk means that anything in it is inapplicable to you, Delphi programmer. If you think that your use of Delphi makes you immune to the ill fates that befall other developers, other teams, using other languages, then you have the very disease at the heart of Bob's talk, and you are exactly the person who needs to watch the video here.

About 42 minutes into the talk, he switches from a passionate cry to developers to commit to test-driven development practices, to talking about Professionalism and gives the definition about "disciplined wielding of power".  It's excellent, and I agree with everything he says.

I would like to give my definition of what a software professional, a member of a team, large or small, or even a single developer working on his or her own, should be doing, if they wish to be a consummate software Professional.

I would also like to hear your professional credo, if you have one. All of this presupposes, I think, that we are people motivated not purely by money, power, position, or a search for comfort and material things, but equally by a desire to do a job well, to do something that is objectively good, worthy of the limited time we all have. In the long run, we are all dead. So why are we doing any of what we do? To do a thing well, to do it professionally, as your career, your vocation, your job, from when you graduate college at 22 until you retire at 65 or leave this life, means, I hope, that writing software is something you do because you love it, and you want to do it well.

Here are my five cardinal rules. Yours may be different. Mine flow from a common core conviction that everything I do has a moral component, and that I will be ultimately happy if and only if I do what is objectively morally good. Why I think that, is off topic for this blog. But the fact that I do think that way is what illuminates and ties together the five points of my personal credo as a software professional:


1.  Do the work you get paid to do. I am a professional. Part of the meaning of that word is that I get paid to write what the people who pay me tell me to write, and I build it according to their rules, conventions, styles, and preferences.  When I'm finished, I got paid, and I got the satisfaction of doing a job well and they own what I built. I respect their intellectual property rights. I don't steal, and I don't dissemble or delay. I also keep my employer's trade secrets and source code private.   I also believe that wasting a day that I got paid to do software by doing nothing is stealing, and so I don't do it.  When I'm out of ideas about what to work on next, I ask what I should do. I try to work on the most important stuff first, so that what I do in a day moves the ball forward as much as I can.

2. Tell the truth, and shine light, even when it's uncomfortable, but be kind to people when you choose how to speak that truth. I try to be honest. I mustn't ever lie. I also try not to blame people. I try to take blame, to accept responsibility. If I work in a team, either as leader or senior developer, I try to absorb blame and share praise. If someone praises me, I'm likely to mention what another person provided that helped me. This helps me to work with other people, because they know I will accept responsibility for mistakes and will not throw a colleague under the bus. I also won't make up stories and lead people on merry chases when they try to figure out what's going on in a project milestone, a team meeting, a product review, a coding task, or a bug. Lies can include omissions; leaving something out of a commit comment can be a form of lying. I think that humility and honesty are only possible in tandem. When I become proud and arrogant, which I sometimes do, because I am only human, it is the desire to be honest that helps me pop the bubble of pride. When I am able to be humble, I find that my conscience prompts me to mention things that are less than flattering. For example: "I fixed this bug, but I am worried that there might be a side effect I don't know how to prevent. Better to say so now, and discover the side effect early, than to hide the fact that my bug-fix may have created a bug because I don't want people to blame me."

3.  Model and reflect moral goodness, and presume good will in other people. I try to expect the same virtues from other people that I expect of myself, and if I do not see that happening, I try to privately, and positively, encourage the behaviours in others that I expect in myself. I find it is possible to encourage fellow developers by modelling my willingness to accept blame, while never attacking people. If I see someone attack a person instead of attacking a problem, or make an issue personal instead of professional ("we should fix this issue" versus "you are bad, and should feel bad, because you made a mistake or didn't realize that this is bad"), I try to encourage a return to objective, practical solution of the actual problem. I try to preserve good will because it's very hard to get it back when it's gone. I think teams need good will and goodness to each other even more than they need technical competence, and by gum, they need technical competence too.

4. Achieve technical mastery the only way it is ever achieved: by lifelong learning. You don't know everything, and neither do I, and even what you do know today, you will forget tomorrow. I don't know jQuery. I don't know Big Data. I don't remember very much x86 machine language anymore. And you know what? I don't have to. The most important technical skills I have are knowing how to learn something new, and how to bisect problems until I find their root causes. With the ability to learn new things, learning is a joy instead of a burden. If you don't like learning, and you don't like bisecting technical problems until you find the root cause, you chose the wrong career, my brother or my sister; I suggest you find another.

5.  Follow the Boy Scout rule, even if you're a girl. Be sure at the end of the day that you leave the world you lived in, the job you worked at, and the codebases you contributed to better after you worked on them than before. Bob Martin calls this the "Boy Scout Rule", referring to "leave the campground cleaner than you found it", but pointing out that the "campground" Baden-Powell was referring to was actually the whole world we live in. I think that the real professional looks at the ends and the means, the goals and the values involved in all the work he or she does, and asks, "Am I comfortable with this? Is this the best we could be doing, and are we doing this the right way?" If I got paid to make a mess, and to contribute to a project or a company failing completely, I do not obtain any sense of wellbeing or personal pride in a "job well done" from that. I wish to do good, to be part of a team that does good, and which builds not only economic value for my employer, but also contributes to the complete wellbeing of the team, the company, its customers, and the wider world around me. This is all part of being a good human being.

Why do I share this with you? Because I want to do something with my life that is good. I want to be worthy of what I get paid, and more. I want to be the kind of person who anyone would hire, and hire again.  Don't you?  What do you think? I welcome your thoughts.  Do you ever reflect on the meaning, and purpose of your life as a software developer? Because I do.  Am I crazy for doing that? I think it's crazy not to do that.

Software development is not a profession in the sense that medicine, law, and engineering are professions. Not yet, anyways. But if we can learn how to become professionals, it will be good for everyone in the world. Even if the software you write doesn't cause airplanes to fall out of the sky, or your Toyota Corolla to accelerate unintentionally because of a lack of tests and testing, there are still many other good reasons to be a professional who cares about the whole quality of the enterprise we are engaged in. That means caring about TDD, or any other practice, if and only if it leads to the desired goal: to do good things, to do them well, and to be responsible in the use of the tools and the time we are given.







Saturday, November 9, 2013

When all you have is a hammer, everything looks like a nail.

I frequently run across questions like this one on Stack Overflow. The person is asking how they might avoid hand-coding complex INSERT statements using TADOQuery. My comment in this question reads:

You might find it easier to use a dataset (TADODataset or TADOTable) to do a dataset-like-job, and use a TADOQuery to do a query-like-job. Oh, and there is TADOCommand to run a Command. So if you're writing SQL "insert" strings, you might want to look at running those with TADOCommand. And you might want to avoid writing them at all, and just set field values and insert into a TADODataset.
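To make that concrete, here's roughly what the two jobs look like, sketched inside a button handler with the components dropped on the form (component, table, and field names are invented):

procedure TForm1.AddCustomerButtonClick(Sender: TObject);
begin
  // Command-like job: run a parameterized INSERT with TADOCommand
  ADOCommand1.CommandText :=
    'INSERT INTO Customers (Name, City) VALUES (:Name, :City)';
  ADOCommand1.Parameters.ParamByName('Name').Value := 'Alice';
  ADOCommand1.Parameters.ParamByName('City').Value := 'Toronto';
  ADOCommand1.Execute;

  // Dataset-like job: no SQL string-building at all
  ADODataSet1.Insert;
  ADODataSet1.FieldByName('Name').AsString := 'Bob';
  ADODataSet1.FieldByName('City').AsString := 'Ottawa';
  ADODataSet1.Post;
end;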
 This is a common coding anti-pattern:

The "All I Have Is A Hammer, So That Must Be a Nail" Anti-Pattern:

  1. Try something, it works on Tuesday July 1st, 2001 to solve the problem you had on Tuesday July 1st, 2001. 
  2. This is now the Standard Way You Do Everything Until the Day You Retire Forever from Coding.
  3. Do not reflect on whether you are forcing round pegs into square holes, but simply continue to develop a large body of "cargo cult" programming practices that were functional once, and are optimal almost never.

Why do we developers do this? Because once we know how to build one solution to a problem, we often stop looking for other ways to solve it. Learning about using the IDE (especially the debugger), learning about using (and writing) components, and understanding the entire Pascal language are all dimensions of learning to be a Delphi developer. As a single developer learning, the pitfall is stopping too soon.

As a team, this pitfall morphs into a collective trap. Developers working on a team must communicate "rules" and "best practices" to each other, and must agree and work together. Working together is a whole other topic that I wish to brush past, so let's just do that, and ask: "How do teams create and modify the set of established rules and conventions they use, and do they ever fall into trap B while trying to avoid falling into trap A?" Do rules that teams make ever become dogma? Are they written without qualifiers, or exceptions? If so, then your team practices can force developers into the "everything is a nail", so "hit it with the hammer" trap.
 
I can list about 20 common hammer/nail traps that I have seen developers fall into. They usually involve the word "always" or "never", and seldom involve the words "think about it" or "use when appropriate":

  1. Never use EXIT
  2. Always use EXIT
  3. Never use GOTO
  4. Always use GOTO
  5. Always put begin and end and never use single statements.
  6. Never put begin and end around single statements.
  7. Always use Data-Aware-Controls.
  8. Never use Data-Aware-Controls. (See Footnote)
  9. Always use Test Driven Development.
  10. Never use Test Driven Development.
  11. Always use Objects, Generics, and Gang-of-Four Patterns, and write your Pascal code like it was Java.
  12. Never use Objects, Generics, or Patterns, and write your Pascal code like it was Turbo Pascal and it's 1989.
Why do developers get stuck in the "always do X" and "never do X" patterns? Why do we argue and fuss about these rules?  I would like to suggest a constructive idea about why:

Solving problems is hard.  When the solution space (the number of things you have to try) gets large, your brain shuts down.  By closing down the "Go Left, Go Right, and Go South" rules in your brain, and leaving only the "Go North" rule, as the One Rule you always follow, until you hit a wall, at which point, your "Rotate Counterclockwise 90 degrees" rule kicks in, developers find it easier to navigate the complex maze of decisions that we make every day when we seek for solutions to our coding problems.   When a solution has worked many times in the past, we sometimes promote our solutions into dogma, like this:

1.  Once, I found it confusing that someone put an exit statement in the code; I didn't notice it when reading the code, and so the flow of the program was confusing to me as a human being, even though it was not confusing to the compiler.
2.  Due to my inability to read and see Exit statements, they are similar to Goto statements; and since Goto statements are "Considered Harmful", as everybody knows, Exit goes on the no-fly list.

Now let's look at code written without exit statements:

procedure TMyForm.MyButtonClick(Sender: TObject);
begin
  if ValidationFunction1(param1, param2, param3) then
  begin
    if ValidationFunction2(param4, param5, param6) then
    begin
      if ValidationFunction3(param1, param2, param3) then
      begin
        if ConfigurationStateDetection(param1, param2, param3) then
        begin
          DoSomething;
        end
        else
        begin
          DoSomethingElse;
        end;
      end;
    end;
  end;
end;

This code could be written better as:

procedure TMyForm.MyButtonClick(Sender: TObject);
begin
  if not ValidationFunction1(param1, param2, param3) then
    exit;

  if not ValidationFunction2(param4, param5, param6) then
    exit;

  if not ValidationFunction3(param1, param2, param3) then
    exit;

  if ConfigurationStateDetection(param1, param2, param3) then
  begin
    DoSomething;
  end
  else
  begin
    DoSomethingElse;
  end;
end;

The above is intended to be a sample that is an order of magnitude less complex than the worst "nesting of begins and ends" that I have seen in the field, at places where "exit" is discouraged or banned.   Practice 1 (which has some merit; I understand that exit can be confusing to developers) causes Problem 2.

If you're going to make the rule "no exits", then you should have a solution for a flow chart like the one below, one that is better than nesting 30 blocks deep with begin and end:

[Flow chart image]
There are exit-free solutions, but most of them are far more baroque than just using exit. If you are committed to your "no exits" rule, you should choose one, and make sure people know how to construct something that is less of a mess than a block of 30 nested begins and ends.  But then, when you're done, ask whether that finite state machine engine you invented isn't just your brain hiding an Exit-like and Goto-like language statement underneath some new layer of bafflement.
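For what it's worth, here is a minimal sketch of one of the simpler exit-free rewrites of the button handler above. It assumes the validation functions are side-effect-free, and that short-circuit boolean evaluation ({$B-}, the Delphi default) is in effect:

procedure TMyForm.MyButtonClick(Sender: TObject);
var
  AllValid: Boolean;
begin
  // With {$B-}, evaluation stops at the first validation that returns False.
  AllValid := ValidationFunction1(param1, param2, param3) and
              ValidationFunction2(param4, param5, param6) and
              ValidationFunction3(param1, param2, param3);

  if AllValid then
  begin
    if ConfigurationStateDetection(param1, param2, param3) then
      DoSomething
    else
      DoSomethingElse;
  end;
end;

This stays flat without a single exit, but only because the validations collapse neatly into one boolean expression; the moment each failure needs its own error handling, the nesting creeps right back in. Which is the point.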

Let me suggest a better set of rules, either with the always/never removed, or at least, weakened with an appeal to developer rationality.



  1. Never use EXIT, when something better exists.
  2. Always use EXIT, when no better solution exists.
  3. Never use GOTO, when something better exists.
  4. Always use GOTO, when no better solution exists.
...  I think I can leave the rest of the list as an exercise for the reader.

So here is my Rule about Coding Rules:

A.  When you think of a coding rule or best-practice, keep your mind engaged while using that practice, and look for places where that practice creates as much or more trouble than it solves.
B.   When you share your ideas about coding rules within your team,  add qualifying conditions like the ones I added to the rules above, to make it clear to team members that these are not Cargo Cult practices, but active thought processes that the team uses to develop software.

As a single developer in a team, it's your job not to create endless waves of complaints; but when you see something that is not working, or which is causing problems, it's important to find ways to discuss those issues.  At all times, you should avoid insults or personal comments.  If the team's best practice is to indent or format or organize code in some unusual way you've never seen before, and which you find yourself unable to understand and deal with (something I experienced personally at one place I worked), you may find yourself tested to the very limits of your ability to cope with the zeitgeist of that team. I know I did.  If you're like me, and you like there to be a "why" or a reason for things being the way they are, it may frustrate you to no end that people are not, in fact, automatons whose behaviour can be predicted or explained by mere rational analysis; they often do the same things over and over for reasons that may be inscrutable to you, without knowing or caring why they do them that way. I'd like to reiterate why teams are like this:

People do things the same way over and over because that's how the human brain works.

Becoming aware of this, and trying to point out that people are hammering screws into plywood, because all they've ever used before are nails, is a bit of an extreme metaphor, and seems insulting, really, at some level.  But we have to remember that "nails" may have been 99.9% effective in all the places where developers have tried to use them.  Your team might think "TADOQuery is good, and TADOCommand and TADODataset and TADOTable are bad", for instance, because that rule got you out of some bad situation once, and now it is your rule.  But ask yourself: are you being introspective, and is your team able to discuss this, or do you just "put your head down", not asking questions, not thinking outside the box?

Going back to the StackOverflow question that inspired this post, I would like to point out that I am not trying to pick on a new Delphi user who is just learning.   I find that I myself sometimes fail to learn all that could be learned about a tool or an environment, and that I tend to repeat the same pattern or solution.  I notice this problem in myself, and I humbly offer this reflection to you, if it is of value:

Ask yourself, every day: "Are there ways to do this that I have not thought of?" and "If I don't know a better way, but I sense something is off here, can I ask a colleague for suggestions, and could we find some more efficient or more optimal way to do this?".    So, in the end, the StackOverflow person who asked this question is more right, and more of a good example, because he or she asked a question, and got a lot of feedback from the big crowd of Delphi geeks on the Internet.

That might be the smartest development best-practice of all.



Friday, November 8, 2013

Surface RT takes a diRT nap

Windows is an operating system for desktop computers. It runs fine on laptops, but the move to a closed hardware platform (like RT) with a codebase like Windows has led to the modern anomaly known as Windows RT.

I bought a Surface RT in September, with the intention of learning WinRT development (in Oxygene or C#) on it.    I upgraded without incident to Windows RT 8.1.  Then, some time yesterday, the Remote Update gods at Microsoft sent an update to my machine, which bricked it completely.

I have initiated an in-warranty exchange online with Microsoft.  I also tried to buy an extended warranty on my Surface RT device.  No offers are available from Microsoft at this time for the first-generation Surface RT.

Is that not a tacit admission by Microsoft that the first-generation Surface RT is a ticking time bomb, of uninsurable proportions?   Let's see.  Secure boot. Remote update.  Battery rundown.  Let's let it update when it's not plugged into the wall.  Can anyone guess the unavoidable end result?  It should take about 30 seconds to figure it out. Microsoft's engineers obviously couldn't.

Update: A new Surface arrived, and the old one was shipped back.  I'm quite happy with Microsoft's exchange program. However, now my Surface RT locks up randomly during use.   Perhaps I have received a "refurb" unit that has a problem?  Lovely.  I may try to contact support again and ask them about it.

Wednesday, October 30, 2013

Quotes and Sayings for Software Developers

I saw this today at Forbes.com:

"They are happy men whose natures sort with their vocations."
— Francis Bacon

I think the above is a particularly good way of explaining why I feel very happy to be a software developer. I love what I do, and so, I find, it is a lot easier to get me to do that work than it would be to get me to do something I did not like doing.  And in the end, not only am I happy; hopefully everyone, including the business owner I work for, and the customers of that business, can be happy too, because they each get what they need.   I think there really is a temperament that is suited to software development, and it's pretty safe to say that I have it.

Another quote that I think is apropos for the professional software developer, which I keep quoting without knowing where it came from, and which dovetails with the Francis Bacon quote, is:

"Hard work becomes easy when the mind is at play"

Unfortunately I do not know the origin of the quote.  I could be quoting myself, or Benjamin Franklin, or Abraham Lincoln, or Mark Twain. Who knows.    The meaning of the quote, to me, is that if a person is unable to enter into, and enjoy, the technical challenge or mastery of a skill required to do a job well, then not only will they hate doing the work, they'll do it badly.  This is another way of saying that temperament affects outcomes, at least in our line of work.  Incidentally, Google and all the quote databases come up empty when searching for quotes about a mind at play.

Please share your favorite quotes about work,  being a software developer, and anything related to the temperament or working lives of developers.

Update: Apologies for the formatting mistakes on this post, and any eyestrain caused by them!

Tuesday, October 29, 2013

Delphi Experts and IDE Plugins I Love Part 3: Model Maker Code Explorer

ModelMaker Code Explorer (MMX) is one of the two commercial Delphi add-ons that I usually buy; the other is MadExcept.   In part four, I will try Castalia, which has a loyal following as well.  But given the choice between the two, I like ModelMaker Code Explorer.



The main things I like about it are:


  • It has the best Class Browser and navigation and structure-view features of any Delphi IDE plugin.  Some of the features that I came to love in Xcode, which are also in GExperts but in less complete fashion, are built into the class browser and navigation features of MMX.
  • One feature that I use all the time is the Uses Clause formatter.  I like my uses-clauses one-per-line because it decreases the incidence of merge conflicts when people just insert units into the middle of complex uses-clause declarations.  It's also useful for when you group your units into sections, as I often do.    Once I have grouped dependencies, the structure of my system becomes clearer.  This is a precursor to making the design simpler, or easier to test, or otherwise better.  Perhaps if your uses clause takes up 300 lines of code, it might help you realize you're building a big ball of mud and that you should start cleaning it up, instead of making it worse every day you work on the code.
Before and after (MMX one-per-line formatting, plus some manual grouping):
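A hypothetical illustration, with made-up unit names, of the sort of thing that produces:

// Before: units jammed together on a dense line, ripe for merge conflicts
uses
  SysUtils, Classes, Forms, DB, ADODB, CustomerUnit, OrderUnit, ReportUnit;

// After: one unit per line, grouped by layer
uses
  // RTL / VCL
  SysUtils,
  Classes,
  Forms,
  // Data access
  DB,
  ADODB,
  // Application units
  CustomerUnit,
  OrderUnit,
  ReportUnit;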


  • Another is a feature under "Text Tools" that will take something you paste from the clipboard and format it as a multi-line string literal. For example, I often use it to take SQL that I wrote in SQL Management Studio and paste it into a .pas unit as a multi-line string literal (see the sketch after this list). Because Delphi lacks an easy "here-document" syntax (like the triple-quote syntax of Python), this is a considerable time-saver.
  • It contains a lot of good refactoring features that the base IDE cannot do, but I actually hardly use any of these. Instead, I find the "live metrics" and other "analysis" features, which show "stuff which needs fixing or attention", far more useful. I tend to leave a lot of TODO items around, and while Delphi contains a TODO feature, having that panel open is a waste of space; having one panel with a comprehensive list of areas that need attention is much more useful to me.   "Metrics" is a poor name choice, in my opinion; it's more of a "Lint" feature.  Lint is a term from the C programming world for additional style warnings that can help you find problems.  While nowhere near as comprehensive as the features in Pascal Analyzer, MMX's "Metrics" features are like a second level of live hints and warnings. Since the live hints and warnings in Delphi often misbehave, I often leave "Error Insight" turned off in Delphi and use MMX instead.  The Event Handler Analysis feature has found dead code for me, and event handlers which had become "unhooked" due to accidental editing, a bug that RAD tools like Delphi are vulnerable to.   Delphi is far behind Xcode, Visual Studio, and most Java IDEs in its Lint/Hint/Warning features, but tools like MMX really help bring it up to speed.


  • I have learned a lot from reorganizing my large classes. For example, sorting the methods and properties in a mega-class is often a good precursor to breaking up the class into a series of smaller, much better designed classes.  Having all the methods that start with "open" sorted beside each other, for example, might lead me to wonder why methods named "openThis", "openThat", and "openTheOther" sit next to another method called "makeViewForSomething".  Maybe I should rename it to "openSomething", since that would make more obvious sense than what I had before. What was I thinking when I called it "makeViewForSomething"?  Or if there was some reason why "makeView" was a better name, maybe all the "open" methods should have been "makeView" methods.   Thinking about whether the stuff in your method names makes sense, and is consistent or inconsistent, is made easier when you organize your classes and units.
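Here is the kind of multi-line string literal the "Text Tools" paste feature produces, as promised above. The constant name and the query are made up for illustration, and the exact formatting MMX emits may differ slightly:

const
  // SQL pasted from SQL Management Studio, wrapped into a Delphi string constant.
  // Note the trailing space inside each line, so the concatenated SQL stays valid.
  SelectOpenOrders =
    'SELECT o.OrderID, o.CustomerID, o.OrderDate ' +
    'FROM Orders o ' +
    'WHERE o.Status = :Status ' +
    'ORDER BY o.OrderDate DESC';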
Like many Delphi add-ons and tools, this one also has a free trial.  I highly recommend you download it and try it.  Perhaps the best feature of MMX is that the guy who makes it, Gerrit Beuze, provides top-tier technical support for his products. If you find a bug, he'll generally fix it pretty fast.  I've been completely impressed with how he handles support and bug fixes.   A wonderful product; it's something I find hard to live without now.


Tuesday, October 22, 2013

Embarcadero MVP list updates

Embarcadero has released a new list of MVPs. I'm very happy to be on that list, because Delphi has been my favorite language, and favorite IDE since it was first released. 

To prevent misunderstandings, let me just point out that an MVP is an independent community member, not an employee of Embarcadero; I am not an employee or sub-contractor.

I believe very much in the importance of the product, and its value to the development world.   I blog about Delphi related topics, on my own time, and I don't get paid to do that.  Why?   You wouldn't see me doing that with Java or with C++, although they are good enough tools in their own ways.  

Delphi has passionate users, because it's offering something unique.  I believe in that something unique that it offers, even though there might be some tools out there that cost zero dollars.  So am I a corporate shill?  No, definitely not.  And is everything always roses around here? No.  But I've always been clearly pro Delphi, and I'm grateful for the recognition that the people who believe in and love this language, and IDE can make contributions to the future of programming, too.

How do I do that?  I think I do that by teaching, consulting, by working with other Delphi people, by building little helpers, add-ons, components, and tools that people can use.  By showing that this is a really great way to build software. 


There are some great folks on the MVP list.  Zarko Gajic, who wrote about Delphi on About.com for many years.  Francois Piette, who has built a startling and fantastic quantity and quality of components, classes, tools, and frameworks in Delphi and made them open source.    Nick Hodges, a well known Delphi Blogger,  who did a stint as Delphi Product Manager, and has had many many other fun adventures in the community, a great guy.  Ray Konopka, Delphi component book guy, owner of Raize Software, builder of my favorite logging tool (CodeSite), and the guy who taught me to write design-time component code with his Delphi component book, back in the Delphi 3 era.  I could go on and on. Primoz Gabrijelcic... Alister Christie...  you guys rock!  Lots more people on that list deserve a shout out, but I'll keep it short. 

Anyways,  all of these people, myself included, think Delphi is awesome, and that it has a bright future ahead of it.

Friday, October 18, 2013

TurboPascal PCODE compiler implemented in JavaScript

Anybody who doubts the sincerity of my last post (the PC desktop is headed inexorably towards legacy-technology status), and who is also into nostalgia, should check this out.

It's a mostly-complete TurboPascal compiler written in JavaScript that compiles to a UCSD-PCODE-equivalent bytecode instead of native x86.  The resulting code runs in a UCSD-PCODE VM, also implemented in JavaScript.

Now that's what I call a killer Blog Post. Read the blog post and try this over here.

Oh yeah, and he posted the code on GitHub.  Anybody want to implement a Pascal RAD IDE in their web browser? Should only take a few weekends here and there.  I kid, I kid.   A desktop IDE and a real compiler do a lot more, and take a lot longer to build, than this Just For Kicks project did.  But if this person had realized that they could just download the real DOS TurboPascal from the Embarcadero/Borland Museum, and done that instead, then there wouldn't be a new JavaScript Pascal parser out there.  Which is inherently a cool thing.





Thursday, October 17, 2013

Windows as a Legacy System

I have been seeing charts like this one all over the place, but I think this one is telling.  It's from a presentation by a firm called KPCB.


First there's this technology cycles graphic:

[Image: KPCB technology cycles graphic]
Finally, here's the visual chart that goes with the above infographic:

[Chart image]
So the WinTel era starts with MS-DOS in about 1982, and if you plot the line at the left side as a gaussian distribution with some kind of long tail, which I think makes sense, you get this:

[Chart image: the WinTel curve with a long tail]
If that happens, here's what you'll see:

  • PC shipments will continue to decline, and the PC will become a tiny niche that represents about 10% or less of the overall computing world's annual sales.
  • Microsoft will continue to control the WinTel/PC desktop world, but the significance of that will shift almost entirely upstream to the Enterprise and large Corporate markets.


Yesterday, when I was having trouble sleeping, I did a search on a few Legacy Technologies that you might not remember.  Does anyone remember the HP e3000 family of computers, and its accompanying MPE/iX operating system?

Imagine your business ran on these boxes:

[Image: HP e3000 hardware]
Then imagine you got a letter from HP saying that they're ceasing production of the hardware you rely on, and of the software that runs your entire business.  They politely suggest you transition onto Something Else.   Your compilers are no good anymore. Your source code is no good.  Your tools and your skills are no good.

Okay enough fear mongering.    Such a thing is very unlikely to happen in the WinTel PC world, at least not before 2099, which is long after I've left this planet, and you too, probably.    

But we're creeping ever closer to the era in which Windows applications on a Windows PC are a niche item, something not everyone wants anymore.  Businesses will continue to use them until there are no more bits of hardware left that run them.  But the days of PCs getting twice as fast are already hard up against the nanoscale physical limits of the universe, and the R&D budgets that drive that Moore's Law growth are going away too.

Soon, very soon, within ten years, Windows will be a shadow of its former self.  This, my friends, is why Microsoft is scared, and is reacting with Windows 8.0 and 8.1. You and me, ordinary developers who target Windows, are going to be fine, just fine.    In 2200, there will probably still be virtual machines running some of the Windows software that you and I are building now.