90 Sprints for Capital Markets – Part 3

Welcome to the penultimate part of my series. This week I’ll walk you through an approach to the release process that worked really well in our clients’ environment, even though it goes against several popular ideas.

Myth #11: “In Agile you don’t plan beyond a Sprint”

Regardless of the process you’ve chosen, remember that you build software to bring business value on time. In general, Agile should allow you to reduce the time required to turn a concept into cash. In investment banking, however, monetizing ideas is not the only driver of the release schedule: regulatory requirements have to be implemented within a fixed timeframe, and in an environment of interconnected services we also have to deliver certain functionality to other services that follow their own release schedules.

On the other hand, it usually makes sense to split a single epic into multiple product releases. The reason mentioned most often is risk: large implementation projects are typically divided into separate, step-by-step releases, and breaking changes are delivered in two phases, with a feature switch introduced in the first release and the old code removed in the second one (see the sketch below). We usually have a rough shape of the goals for a few Sprints and releases ahead. However, we don’t use a Gantt chart for that.
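
As an illustration of that two-phase approach, here is a minimal Java sketch; the flag store, the flag name and the class names are hypothetical, not taken from the real project:

    // Release 1: both code paths ship, hidden behind a feature switch that is off by default.
    // Release 2: once the new path has proven itself, the flag and the legacy branch are deleted.
    interface FeatureFlags {                       // hypothetical flag store (config file, DB, admin UI, ...)
        boolean isEnabled(String flagName);
    }

    interface ReportWriter {
        void write(String tradeId);
    }

    class TradeReportService {
        private final FeatureFlags flags;
        private final ReportWriter legacyWriter;   // old behaviour, removed in the second release
        private final ReportWriter newWriter;      // new behaviour required by the change

        TradeReportService(FeatureFlags flags, ReportWriter legacyWriter, ReportWriter newWriter) {
            this.flags = flags;
            this.legacyWriter = legacyWriter;
            this.newWriter = newWriter;
        }

        void publish(String tradeId) {
            // The switch can be flipped per environment without another deployment
            if (flags.isEnabled("new-trade-report")) {
                newWriter.write(tradeId);
            } else {
                legacyWriter.write(tradeId);
            }
        }
    }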

Myth #12: “You are not Agile if you don’t release every Sprint”

This rule actually means that you should keep your project “deployable” at all times. At the end of each Sprint, you should have a working version of some part of the functionality that brings value and gives you something you can show to the users. In general, it might be hard to align production releases with Sprints; in our case, there are non-technical constraints on the dates on which a production release can happen. However, I know teams that need to release more often than once per Sprint – even daily.

Nothing stops you from deploying to a production-like environment (UAT, prod-parallel) continuously. For us, a deliberately simple continuous delivery process combined with full deployment automation was the single best decision we made to boost productivity (a minimal pipeline sketch follows the list below):

  • Each feature branch pull request goes through code review and an integration build
  • Once merged to master, the code is built and deployed to the test environments; each build gets a unique version
  • A build/version is promoted from one environment to another (QA, UAT, PROD) without a rebuild
  • Deployment is fully automated using Jenkins and Liquibase – no need to log into servers
  • One-click deployment works for production as well, including on the day of the release
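
To make the promotion step concrete, below is a minimal sketch of what such a Jenkins declarative pipeline could look like. It is illustrative only: the parameter names and the fetch-artifact.sh / deploy.sh scripts are hypothetical placeholders for whatever fetches the already-built artifact and runs the Liquibase update, not the actual project setup.

    pipeline {
        agent any
        parameters {
            // Version of an existing build to promote – nothing is rebuilt here
            string(name: 'VERSION', description: 'Build version to promote')
            choice(name: 'TARGET_ENV', choices: ['QA', 'UAT', 'PROD'], description: 'Environment to deploy to')
        }
        stages {
            stage('Fetch artifact') {
                steps {
                    // Pull the already-built, uniquely versioned artifact from the artifact repository
                    sh "./fetch-artifact.sh ${params.VERSION}"
                }
            }
            stage('Approve production') {
                when { expression { params.TARGET_ENV == 'PROD' } }
                steps {
                    // A single manual confirmation is the only extra step for a production release
                    input message: "Deploy version ${params.VERSION} to PROD?"
                }
            }
            stage('Deploy') {
                steps {
                    // The script applies the Liquibase database changes and rolls out the artifact –
                    // no one logs into the servers by hand
                    sh "./deploy.sh ${params.TARGET_ENV} ${params.VERSION}"
                }
            }
        }
    }

Promoting the exact artifact that already passed QA and UAT, rather than rebuilding from a tag, is what removes the “it worked on the build server” class of surprises.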

The actual release process is tested multiple times a day in test environments, hence there is no stress when it comes to a production release.

Myth #13: “It’s done when I’m done with development and tests”

The product owner signs requirements off, then hands them off to a designer. The designer hands them off to the development team, then the developers hand them off to QAs, who sign them off for UAT, and then UAT tests are needed for release sign-off… If your process looks like this, then you’re doing Waterfall. Just to remind you, Waterfall was first established in 1956 by Herbert D. Benington, and later defined in 1970 by Winston W. Royce. If your team is still following it in 2018, please consider that this process might be a bit outdated.

As I’ve mentioned before, there is a difference between developers who convert requirements into source code and a team that helps solve business problems – and business problems are only solved once the solution reaches production. A Team works on each User Story from the initial client discussion about the actual business need, long before development starts, through to the production release and post-release support. If you find yourself in a situation where your team spends a lot of time fixing defects and implementing new requirements for User Stories that were “Done” and are now back, you might need to fix the process first. This won’t get any better, and it will keep breaking your Sprint plan, until you change the way of thinking.

A common Definition of Done (DoD) looks like this:

  • Covered with automated unit and integration tests that check the Acceptance Criteria (see the sketch after this list)
  • Code passes static code analysis tools
  • Code Review passed (tests are code!)
  • Sufficient user and support documentation in place
  • Verified in QA environment
  • Accepted by users in UAT environment
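
For the first item on that list, one simple convention is to name each test after the Acceptance Criterion it verifies. The sketch below is purely illustrative – the criterion, the test class and the inline T+2 calculator are invented for the example:

    import java.time.DayOfWeek;
    import java.time.LocalDate;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class SettlementDateTest {

        // Minimal T+2 calculator, defined inline only to keep the example self-contained
        static LocalDate settlementDate(LocalDate tradeDate) {
            LocalDate date = tradeDate;
            int businessDays = 0;
            while (businessDays < 2) {
                date = date.plusDays(1);
                if (date.getDayOfWeek() != DayOfWeek.SATURDAY && date.getDayOfWeek() != DayOfWeek.SUNDAY) {
                    businessDays++;
                }
            }
            return date;
        }

        // Acceptance Criterion (hypothetical): "A T+2 trade booked on a Friday settles on the following Tuesday"
        @Test
        void tradeBookedOnFridaySettlesOnFollowingTuesday() {
            LocalDate friday = LocalDate.of(2018, 6, 1);                    // 1 June 2018 was a Friday
            assertEquals(LocalDate.of(2018, 6, 5), settlementDate(friday)); // Tuesday, after skipping the weekend
        }
    }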

Until all of these items are covered, a User Story is still “in hand”, i.e. under development. Achieving the last step within a Sprint can be problematic, as it depends on how much time business users can spend on UAT testing. To limit the time spent fixing defects found at any of the steps above, we’ve started sorting bugs and defects into several categories:

  • Code – bug in the code, crash with an error
  • Behaviour – code works, but differently than described in Acceptance Criteria
  • Requirement – code works just like requirements say, but this does not solve the business need
  • Data Corruption – code works correctly but the data stored or migrated does not meet the constraints
  • Test – wrong test scenario
  • Not-a-bug – anything raised by mistake, to be filtered out of the results

Analysing and fixing each defect extends the time between the start of development and the moment a Story is actually “Done”. Looking at how defects spread across these categories helps identify where to put the effort first to improve the process. You may need to invest more in analysis and in identifying the actual business need upfront to avoid missing or wrong requirements, or perhaps you need better test scenarios or more thorough code reviews. These are the constraints that will keep you from shortening the time to market until they are addressed.

And what if an issue does make it to production? After the bug is fixed, spend some time on a post-issue retrospective. Of course, I don’t mean a session of “Who did this?” Instead, think about which part of the process (or of your Definition of Done) can be improved to avoid such a situation in the future. Make this conversation open and respectful, share your findings and your plan with other stakeholders to gain their trust, and “celebrate” failure rather than trying to hide it. Remember that it is also the Team who should understand the business needs: ask questions, challenge requirements, and check corner cases and tough scenarios to make sure the Acceptance Criteria actually solve the business problem.


Continue to the final part of the series, where we will take a look at efficient estimation and time tracking.
