Iterations and releases

  • 3-week iterations (1-week iterations proved to be too intense)
    • More precisely, an iteration starts on a Monday and ends on the Thursday of the third week, so there are 14 working days in an iteration
  • Each iteration corresponds to a Trac milestone
  • We release working software at the end of each iteration/milestone
  • 3 releases before Feb. 23rd (the PyCon'06 tutorial day)

Story-driven development

Many of the following concepts and ideas are borrowed from User Stories Applied by Mike Cohn.

  • Features are developed as stories
  • A story should fit on an index card and should be expressed in end-user language, not in technical jargon
  • Examples
    • Users can browse messages by thread
    • Users can conduct simple searches with multiple words
    • Users can conduct advanced searches (within specific headers, with logical operators)
    • Users can comment on specific messages and threads and can search on comments
  • Each story needs to be estimated in story points
    • A story point should roughly correspond to an "ideal" work day, i.e. the amount of work you could accomplish in a full working day dedicated to the story, with no interruptions
    • Stories should typically be 1 to 4 story points in size; if they are bigger than that, they need to be split
    • Since there are 2 of us, there are 28 man-days in a 14-day iteration
      • Mike Cohn recommends allocating 1/3 to 1/2 as many story points to an iteration as there are man-days in an iteration
      • This means that we should plan for 9 to 14 story points per iteration, give or take (a short arithmetic sketch appears after this list)
      • We also need to take testing into account when it comes to estimating a story
      • A good rule of thumb might be to allocate 2/3 of a story's points to development and 1/3 to testing; so if Titus estimates that a given story will take 2 story points of development work, another point will be added for testing
  • The number of story points we estimate we can complete in a given iteration is the planned velocity for that iteration
  • Stories can be created at any time, then prioritized for inclusion in the next iteration
  • A story needs to be decomposed into tasks
    • Tasks can be expressed in technical jargon
    • For example, a user story such as "Users can comment on specific messages and threads and can search on comments" could be decomposed into the following tasks:
      • Include Commentary code from Ian Bicking's project
      • Update Commentary code periodically so that the latest bells and whistles are captured
      • Test Commentary functionality using the Ajax capabilities of Selenium
      • Test that words included in commentaries appear in searches
    • The unit of estimation for a task is 1 hour; each task should typically take 1 to 4 hours of work
    • Adding up the hours for a story's tasks should be consistent with the number of story points estimated for that story
  • A story is not complete until it is tested
    • Two types of tests per story
      • Unit tests -- both Titus and Grig to write the unit tests
      • Functional/acceptance tests (mainly at the logic layer, but also, if necessary, at the GUI layer) -- Grig (and potentially Titus) to write the functional tests (a sketch of such a test appears after this list)
  • Progress within an iteration is tracked via a burndown chart, which plots the number of hours of work remaining on the tasks that make up that iteration's stories -- a plotting sketch appears after this list
  • Progress on the project as a whole (comprising multiple iterations) is tracked via 2 types of charts:
    • Planned vs. actual velocity chart (pages 119 and 120 in Cohn's book)
    • Burndown chart at the project level, which plots the number of remaining story points per iteration (page 121 in Cohn's book)
  • See also XPlanner for ideas on progress tracking
  • We can use Trac to keep track of stories and tasks
    • A story can be entered as an "enhancement"-type ticket
    • A task can be entered as a "task"-type ticket
    • When creating a story or a task as a ticket, we also record the estimate (in story points for a story, in hours for a task)
  • We can take turns being the "tracker" on the project
    • During an iteration, the tracker updates the iteration burndown chart
    • At the end of each iteration, the tracker updates the project-level burndown chart and the planned vs. actual velocity chart
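
The capacity arithmetic in the bullets above can be checked mechanically. The sketch below simply restates it in Python; the numbers come straight from the bullets, and none of the names belong to the project's code.

    # Iteration capacity, as described in the bullets above.
    WORKING_DAYS_PER_ITERATION = 14   # Monday of week 1 through Thursday of week 3
    TEAM_SIZE = 2                     # Titus and Grig

    man_days = WORKING_DAYS_PER_ITERATION * TEAM_SIZE   # 28 man-days

    # Cohn's rule of thumb: plan 1/3 to 1/2 as many story points as man-days.
    low = man_days // 3    # 9 story points
    high = man_days // 2   # 14 story points
    print("Planned velocity: %d to %d story points" % (low, high))

    def story_points_with_testing(dev_points):
        """Add the testing allowance: 2/3 development, 1/3 testing."""
        return dev_points * 3 / 2

    print(story_points_with_testing(2))   # a 2-point dev estimate becomes 3 points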
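
As an illustration of a GUI-level acceptance test, here is a minimal sketch for the "simple searches with multiple words" story. It uses the present-day Selenium WebDriver Python bindings rather than the in-browser Selenium runner mentioned in the tasks above, and the URL and the "q" field name are assumptions, not the application's actual interface.

    # Hypothetical GUI-level acceptance test for the simple-search story.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get("http://localhost:8080/search")        # assumed local dev server
        search_box = driver.find_element(By.NAME, "q")    # assumed search field name
        search_box.send_keys("grid computing")
        search_box.submit()
        # One plausible check: both words of the query show up in the results page.
        page = driver.page_source.lower()
        assert "grid" in page and "computing" in page
    finally:
        driver.quit()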
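
The iteration burndown chart can be produced with a few lines of matplotlib, assuming the tracker records the total remaining task hours once per working day. The figures below are invented purely to show the shape of the chart.

    # Sample iteration burndown chart: remaining task hours per working day.
    import matplotlib.pyplot as plt

    days = range(15)   # day 0 (iteration start) through day 14
    remaining_hours = [84, 80, 74, 70, 66, 58, 54, 47, 40, 36, 30, 22, 15, 8, 0]
    ideal = [84 - 84 * d / 14.0 for d in days]   # straight line from start to zero

    plt.plot(days, remaining_hours, marker="o", label="actual remaining hours")
    plt.plot(days, ideal, linestyle="--", label="ideal burndown")
    plt.xlabel("working day of iteration")
    plt.ylabel("remaining task hours")
    plt.title("Iteration burndown (sample data)")
    plt.legend()
    plt.savefig("iteration_burndown.png")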

Continuous integration

  • We package the software continuously (every night) using buildbot
  • We also run the unit tests and a smoke test (a subset of the functional tests) every night via buildbot (a minimal configuration sketch follows this list)
  • Before each release, we also run performance tests
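
For the nightly packaging and test runs, a buildbot configuration fragment along the following lines would do. This is only a sketch using the present-day buildbot API (the configuration syntax in use at the time differed), and the step commands are placeholders rather than the project's real build and test commands.

    # Fragment of a hypothetical buildbot master.cfg for the nightly job.
    from buildbot.plugins import schedulers, steps, util

    c = BuildmasterConfig = {}   # workers, change sources, etc. omitted here

    nightly_factory = util.BuildFactory()
    nightly_factory.addStep(steps.ShellCommand(name="package",
                                               command=["python", "setup.py", "sdist"]))
    nightly_factory.addStep(steps.ShellCommand(name="unit tests",
                                               command=["python", "-m", "unittest", "discover"]))
    nightly_factory.addStep(steps.ShellCommand(name="smoke tests",
                                               command=["python", "run_smoke_tests.py"]))

    c['builders'] = [util.BuilderConfig(name="nightly",
                                        workernames=["worker1"],
                                        factory=nightly_factory)]

    # Run the whole sequence every night at 2:00 AM.
    c['schedulers'] = [schedulers.Nightly(name="every-night",
                                          builderNames=["nightly"],
                                          hour=2, minute=0)]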

Bug tracking

  • We file bugs using Trac's ticket system

XP/Agile concepts