Friday, July 29, 2011

Introductions

Hi there, I'm Alec Munro. For various reasons, I've been unable to engage online for the last several years. That is no longer true, and I find myself with time, so here I am, engaging (well, I hope so).

So what am I trying to engage with? Well, I'm a software developer, interested in refining the process of software development to produce higher quality software. In practice, my skills and experience tend towards Python and web development, with the last five years spent in the realm of test automation. Let's break that down...

When I say I want to produce higher quality software, I'm defining quality as "that which meets or exceeds my customer's expectations". This is a definition I'm stealing from someone, but it feels correct to me. There are two implicit conditions in that definition that go beyond simply writing good code: that you know who your customers are, and that you understand their expectations. So for me, developing quality software is a very holistic process, one that encompasses the entire lifecycle of a desired piece of functionality.

I've been developing web sites for 14 years, starting with static HTML and JavaScript, moving through Flash and PHP, and finally "settling" on Python about 8 years ago. Since then, web development has almost never been my primary occupation, but it has almost always been a tool I could use to improve the projects I was working on. In the Python world, I started with Zope 2, moved to Zope 3, then Grok, and most recently Pylons and Pyramid. I'm biased towards building everything as a REST-ish web service, with a UI built in jQuery.

I came to test automation because I saw a need. I saw (seemingly) automatable tasks being performed by manual testers, and I took action to address this. I learned, with great difficulty, that many things I had assumed could be easily automated were actually terrible candidates for automation, often due to the development process that produced them. I learned about "testability" (thanks Misko!), and the role that test automation must play in driving development process improvements. I also gained an appreciation for what I call "proactive transparency", which is the philosophy that much of the information people need is too difficult to find, because those producing it don't know how to publicize it (or don't appreciate the need to do so). In test automation, this applies to results of testing, but I've found it applicable to many other areas.

So that's a rough overview of who I am, and what I can do. But where's the engagement? Well, I find myself without a job, but with a bit of a margin before I need one. So I thought I would take this time to write on subjects I know about, as well as to study and report on those I don't. This will hopefully give me the opportunity to broaden my skills (specifically for interviewing purposes, I will admit), while also re-engaging with the development community I have been absent from for several years. Hopefully some of these topics will interest you, the reader, and you will join in the discussion, correcting me where I err, or weaseling further answers out of me when I am unclear.

What kind of topics? Well, I have no formal computer science training, so there are a couple of areas that tend to come up in interviews where I don't think I give satisfactory answers. These include:
  • Design patterns. I know many of their names, and I know I use them often in my work, albeit uncredited. I will research the most common ones, invent problems where they are applicable, and explain why I would choose one over another.
  • Sorting. Quick sort, merge sort, etc. Using Python for web development, I've never been held back by the performance of a sorting operation, but I understand these algorithms are considered quite foundational, and I know I've suffered in interviews for lack of familiarity with them. (A first attempt appears just below this list.)
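To get the ball rolling on sorting, here is a first pass at merge sort in Python. Treat it as a sketch of my current understanding rather than a tuned implementation; I'll dig into the actual analysis in a later post.

    def merge_sort(items):
        # Base case: a list of zero or one items is already sorted.
        if len(items) <= 1:
            return items
        # Recursively sort each half.
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # Merge the two sorted halves back together.
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 8, 1, 9]))  # [1, 2, 5, 8, 9]
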
I will also go into some topics that are interesting to me, and play to the strengths I have developed during my career.
  • Mock library comparison. I consider the use of mock objects to be an essential (and often neglected) part of almost any software development project. I also think there are too many Python mock object libraries. :) I will discuss this in more detail, show examples of each library in use, and try to illustrate their respective strengths and weaknesses. (For a taste of the basic idea, see the sketch just after this list.)
  • Web application development. Because of my recent work history, I have virtually no public code to share with potential interviewers. So, purely as a vanity project, I will create what I consider to be a properly tested and documented web application, and write about the process. I'll try to make a useful application, but I can't promise that.
  • RabbitMQ, ZeroMQ, Gevent. I keep running across these things, but I've never had an occasion to do much beyond skim their docs. They seem interesting and well regarded by those who are well regarded by me, so I'll look into them.
  • Go (the language, I'll leave the game to my father, at least for now). I've heard being laid off is a great time to learn a new language, and two of the jobs I've seen that would be most interesting mention Go, so I think that will probably be it for me.
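Speaking of mocks, here is the flavor of test I have in mind, using Michael Foord's mock library as one example (the function and object names below are invented for illustration):

    from mock import Mock

    def send_welcome(user, mailer):
        # The unit under test; "mailer" is the collaborator we mock out.
        mailer.send(user.email, "Welcome!")

    # Stand-ins for the real user object and mail gateway.
    user = Mock()
    user.email = "alec@example.com"
    mailer = Mock()

    send_welcome(user, mailer)

    # Verify the interaction happened, without sending real mail.
    mailer.send.assert_called_with("alec@example.com", "Welcome!")
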
To further my ideal of engagement, I will post any code I write for these on GitHub (also new to me), and encourage contributions from the community. Hopefully, the community will find enough of use in what I create to do so.

2 comments:

  1. Hello,

    What you say about publishing test information interests me.

    I currently work with infrae.testbrowser, and I am interested in finding a way for my tests to produce a human-readable (say, *client*-readable) report of the actions performed during a test; the same document you would give a human tester.

    I think using code to create tests is fine, but my client can't validate what my code does; with such a document, he could.

  2. Hi Alex,
    I'm not sure I've ever used infrae directly, but the name sounds familiar from my early Zope days.

    Much of the testing I've done recently dealt with navigating a UI, so we were also quite concerned with understanding exactly what happened during the test, and what steps were taken.

    To deal with this, we added a layer between our test cases and our UI driver that recorded distinct actions. For each action, we recorded the command, how long it took, whether it returned properly, and a message if it returned one. We also eventually extended this to allow defining higher-level actions within our tests, in our case by using a decorator. So, for example, we might have a "go to page" action that took a number of steps to get to the targeted page, but in our tests, most of the time we didn't care about those steps, just whether we were able to get to the page or not.
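    Roughly, that layer looked like the sketch below (reconstructed from memory; the names are illustrative, and the real version wrote to a results database rather than a list):

        import functools
        import time

        RECORDED_ACTIONS = []  # the real system wrote to a results database

        def action(name):
            # Mark a function as a distinct, reportable test action.
            def decorator(func):
                @functools.wraps(func)
                def wrapper(*args, **kwargs):
                    start = time.time()
                    try:
                        result = func(*args, **kwargs)
                        RECORDED_ACTIONS.append(
                            (name, time.time() - start, True, result))
                        return result
                    except Exception as e:
                        RECORDED_ACTIONS.append(
                            (name, time.time() - start, False, str(e)))
                        raise
                return wrapper
            return decorator

        @action("go to page")
        def go_to_page(browser, url):
            # Several lower-level driver steps live here; the report
            # only shows the single high-level action.
            browser.open(url)
            return "arrived at %s" % url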

    So we gathered that kind of information during a test. In order to visualize it, we needed to build a custom UI around the results database. I wouldn't be surprised if there are open source projects that provide this, or something close to it already.
