#54: The Programmer's Oath: I will produce, with each release, a quick, sure, and repeatable proof that every element of the code works as it should.

During September, I'm running a short survey to understand UK executives' attitudes to custom software development. Please take the time and have your say at: https://software-survey.red-folder.com/

In this episode I continue to look at professionalism in software development.

I take the third oath from the Programmer's Oath by Uncle Bob Martin, introduced in episode #51, to explore further:

"I Promise that, to the best of my ability and judgement: I will produce, with each release, a quick, sure, and repeatable proof that every element of the code works as it should."


Published: Wed, 02 Sep 2020 16:07:56 GMT

Links

The Programmer's Oath

Transcript

[00:00:35] Hello and welcome. In this episode, I want to continue talking about professionalism within software development, and I'll continue talking about it through the lens of the Programmer's Oath by Uncle Bob Martin. In Episode 51, I introduced the Programmer's Oath by Uncle Bob, and over the last couple of episodes I've picked up the first two oaths and talked about them.

[00:01:00] In this episode, I want to pick up on the third Oath:

"I Promise that, to the best of my ability and judgement: I will produce, with each release, a quick, sure, and repeatable proof that every element of the code works as it should."

[00:01:18] What we're talking about here is quality. It's making sure that the work we do as software developers is done to a significant level of quality; that the systems work as we would expect them to, and that the end customer, the organization and the team are all getting the most value out of the work that we're doing.

[00:01:43] Now, personally, I think that's a reasonable expectation, especially from the organization's point of view. If the organization is paying for that software development to be done, which is going to be expensive, they can reasonably expect it to work. They can reasonably expect a level of confidence that they're putting money into a premium product; they're spending good money, so they can expect a level of quality.

[00:02:16] Now, I actually question whether that normally happens within software development. Software development has traditionally been very much siloed.

[00:02:27] So software developers will build the system. But their job isn't quality. That's for someone else to do. That's for the tester, that's for the Quality Assurance Department, maybe even the business to validate what they've done is correct.

[00:02:44] Now, think about that in any other line of business. You're paying someone a premium price to produce what should be a premium product, so shouldn't you expect a level of quality to go with that? So for me, proofs make sense: having some ability to prove that what's been done works and, more importantly, continues to work. So not just the change I've made today or the change I've made this week, but also any functionality that existed within the system before I touched it. We want to make sure that I haven't introduced some form of regression, some problem into an existing part of the system, as well as making sure the work I've done now works as expected.

[00:03:36] In Episode 14, I introduced a number of testing types. I talked about manual testing, where somebody will sit in front of the software and manually test it. I talked about unit testing, which is a development-type test, automated, and able to test the code at a very small level. I talked about integration testing, which again is largely a development test, done to make sure the individual components of the system work together well. And I talked about acceptance testing, which is the business test to check that the software actually meets what we wanted it to do.

[00:04:13] And there are actually other types of test out there, and it isn't necessary that this proof be any one of these tests. It could be a combination of all of them, or something entirely different, as long as there is a proof. I personally prefer to use a lot of automated tests, because that way you're gaining the value every time you use them. An automated test you can start running and leave; you can run it repeatedly, over and over again, and you can get so much more efficiency and productivity out of automated tests than you can out of manual ones. Then you can be confident.
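To make that concrete, here is a minimal sketch of what such an automated proof can look like. None of this code comes from the episode; the function, its business rule and the file name are purely illustrative assumptions, written with Python's standard unittest framework.

```python
# test_pricing.py - a minimal, hypothetical example of an automated unit test.
# The calculate_discount function and its discount rule are invented for
# illustration; the point is that this proof is quick, repeatable, and can be
# re-run unchanged on every release.
import unittest


def calculate_discount(order_total: float) -> float:
    """Apply a 10% discount to orders of 100 or more (illustrative rule only)."""
    return order_total * 0.9 if order_total >= 100 else order_total


class CalculateDiscountTests(unittest.TestCase):
    def test_small_orders_are_not_discounted(self):
        self.assertEqual(calculate_discount(50), 50)

    def test_large_orders_get_ten_percent_off(self):
        self.assertAlmostEqual(calculate_discount(200), 180)


if __name__ == "__main__":
    unittest.main()  # run with: python test_pricing.py (or python -m unittest)
```

Run on every release, ideally as part of a build or CI pipeline, a suite of tests like this becomes the quick, sure, repeatable proof the oath describes: it covers the change made today and keeps guarding the functionality that was already there.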

[00:04:49] You can run it repeatedly and make sure you're not introducing problems. But, automation aside, that doesn't stop us from using some form of manual proof to make sure the system works, and I think this depends on the system that you're operating.

[00:05:03] The complexity of the system and size of the system almost certainly will come into that equation. If it's a big, complicated system, then I would certainly expect a number of tests across all of those different types. So certainly unit testing, certainly integration, certainly acceptance, certainly manual. If, however, it's a really small system, maybe some small utility, maybe it can be tested by a five minute manual test.

[00:05:34] So why wouldn't this be happening as a day to day activity anyway? And that's a good question.

[00:05:41] As I've said, I actually believe you're paying for a premium product, you're spending large sums of money on expensive people. Developers are not cheap. So you'd expect there to be a level of quality.

[00:05:55] The problem is, I think we've got quite a history of people looking at developers as an expensive resource and looking to utilize them on a cost-versus-time basis. That developer needs to be assigned to the most valuable thing and constantly working on the most valuable thing we can think of.

[00:06:13] We're actually trying to utilize that time rather than necessarily the value. So we're allocating them out and trying to get as much out of their time as we can.

[00:06:25] Certain things like testing suddenly become seen as less valuable, or we don't want to spend the developer's time doing any form of testing. We push that to someone else who can do the testing; we push that to a cheaper resource.

[00:06:39] Now, for me, that is an entirely wrong way of looking at it. I've talked before about how much more expensive a problem becomes the further it gets away from the developer before it's identified. If a developer finds a problem while they're in the middle of the development, they can fix it there and then. Yes, they're taking time to fix it, but it's in their head. They know where they are, they know what's happening, and they can fix it relatively efficiently and very cheaply.

[00:07:09] If, however, the process is one where the developer does their work and passes it off to another team, a team that doesn't look at it or validate it for maybe another two or three weeks, then by the time that team has actually found the bug, found the problem, God forbid it may already have made it into production.

[00:07:30] You go back to the original developer and they cannot remember what they were doing to produce that bug in the first place. They're suddenly in a position where they have to spend a lot of time trying to remember what it was they were doing when they actually produced the problem.

[00:07:45] Or it may even be someone different, someone who has no actual knowledge or understanding of what the change was, so they have to spend all that time picking it up. And of course, if it's made its way into production, you're also incurring whatever costs that bug causes in production for your customers.

[00:08:05] So I actually think it's the wrong thing to do, to think about developer cost when it comes to testing. I actually think they should be doing a lot of these proofs, but as I say, a lot of people have thought that's the wrong thing to do.

[00:08:17] I've even heard stories of software developers being disciplined through H.R. procedures for introducing unit tests.

[00:08:28] Think about that for a second. A software developer has done what they believe is the most professional thing they can do at the time. They've added unit tests to software they have written.

[00:08:42] Now, the H.R. department think "That's worthless. Why are we doing that? That isn't their job. Testing is done by this QA Team".

[00:08:51] But the software developer knows that if they can put the unit tests in, that's a much more efficient way of working. I'm not saying you still wouldn't put it through a QA department, but you're improving the quality of the software.

[00:09:03] But they were still brought up in front of the H.R. department and disciplined, because it's not their job and they're wasting time. They're wasting company resources and money, which effectively is their time; they're being paid, and they shouldn't be doing that work.

[00:09:19] As I say, this for me is a farce.

[00:09:22] There's a clear basis of evidence that putting developer time and money into those unit tests, into acceptance tests, into integration tests, anything that has a level of test that can be automated, has significant payback. It has significant payback both in the quality of the product and in the continued repeatability of testing for that product further down the line.

[00:09:48] It's interesting, because it almost speaks back to that gold plating I talked about in Episode 52. And this is one of those areas where I have seen pushback, as demonstrated by that disciplinary, where the business doesn't feel that the work is required. They think developers are going too far, that they're gold plating the process, doing work that isn't actually required for what they're paying for.

[00:10:15] And as I say, that's unfortunately wrong. It's not a gold plating exercise, it's delivering a level of quality.

[00:10:24] It's making sure that you can prove that the software you've written works and, as I say, continues to work. So, yes, there is an upfront cost in producing it; yes, there is an upfront cost in actually building those tests. But in the long term, there is a return on investment.

[00:10:45] But, of course, it comes down to choice again, and we're back to that technical debt argument. As a business, you may choose: "No, don't do the testing. Let's just get the feature done".

[00:10:56] But are you accepting that, if you do that, you are actually taking on a longer-term debt? You're taking on software that is going to be poorer quality, have more bugs, be more difficult to maintain, and be more expensive to operate in the longer term, in exchange for that short-term win?

[00:11:14] Now, on occasion that can be the correct thing to do. It could be that you've got a major deadline that you have to hit: some make-or-break trade show. Maybe if you don't hit that, then the business ceases to exist.

[00:11:28] Most of the time, that isn't true. Most of the time the dates put in place are arbitrary, and there's somebody at some level above who goes "I want it by X date".

[00:11:37] And that's where professionalism should be pushing back and saying "no".

[00:11:42] And this is why we want to be able to make sure it's a quality product. We want to make sure that we can prove that it works, that it's appropriate and that it's correct.

[00:11:54] It is true that it's never going to be possible to provide full proof. There is the idea of a formal, mathematical proof, but that's very rarely, if ever, going to be possible within software development; there are simply too many variables. So while you should be doing all of this to try and prove, as best you can, that the software is working as it should be, it is obviously never going to be 100 percent - which is why I've talked previously about monitoring and logging for production systems, to identify any issues that do make it in - because they will.
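As a purely illustrative sketch of that safety net (nothing here comes from the episode; the service, the function names and the log format are assumptions), even basic logging around a risky operation gives you something to monitor in production:

```python
# payment_service.py - a minimal, hypothetical sketch of production logging.
# The charge() call and the order details are invented for illustration; the idea
# is that issues which slip past the tests still leave a trail you can act on.
import logging

logger = logging.getLogger("payments")
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(name)s %(message)s")


def charge(order_id: str, amount: float) -> bool:
    """Stand-in for a real payment call; assume it can fail at runtime."""
    raise TimeoutError("payment gateway timed out")


def process_order(order_id: str, amount: float) -> bool:
    try:
        return charge(order_id, amount)
    except Exception:
        # Record the full stack trace plus the context needed to investigate.
        logger.exception("payment failed for order %s (amount %.2f)", order_id, amount)
        return False


if __name__ == "__main__":
    process_order("A-1001", 49.99)
```

In a real system those log lines would feed whatever monitoring or alerting is in place, so the issues that inevitably get through the proofs are spotted quickly rather than discovered by the customer.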

[00:12:32] But the key here is trying to make sure, as much as possible, that quality problems are found early; that any bugs, any faults, any issues are found much earlier in the lifecycle, where they are so much cheaper, much more effective and much more productive to resolve.

[00:12:52] And again, that one's going to come back to the best of my ability and judgment. It's that balancing act of what your judgment is telling you is the right way to invest time, effort and money at any given point in time.

[00:13:09] In the next episode, I'm going to take a look at the next Oath. I'm going to take a look at:

"I Promise that, to the best of my ability and judgement: I will make frequent, small, releases so that I do not impede the progress of others."

[00:13:27] Thank you for listening and I'll speak to you again next week.