Saturation vs MoreIsBetter Scoring for SideEffects

Imagine two different types of scoring scenarios:

  • Hours – the number of man-hours it takes to complete a complex software creation schedule
  • ScalesUp – a quality of software that lets it scale to a near-infinite number of users

Both of these are, in theory, Saturation based scoring models, but only in theory. In practice, there is no hope of ever reducing the number of hours to zero, so Hours really behaves as a MoreIsBetter model: the more efficient the process (here, the fewer the hours), the better.

  • 2863 hours: good
  • 2714 hours: better

But ScalesUp is like most other scoring aspects of creating software: it is a goal that is actually quite attainable in a very finite sense. Once you have built software which can and does deploy to near-infinite scale, you are pretty much done with that. It works, and in this case more is not better. You can’t improve much on perfection. This is the Saturation model, and it can easily be represented as a 0-100 score, where 100 is the highest score.

  • score of 0 – 0% complete
  • score of 70 – 70% complete
  • score of 99 – 99% complete

We can conclude, then, that in practice the two fit opposite scoring models:

  • Hours: MoreIsBetter model
  • ScalesUp: Saturation model
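
To make the contrast concrete, here is a minimal sketch in Java. It is not part of Smoslt, and every name in it is made up for illustration; the only assumptions are the ones just described: a MoreIsBetter score simply compares raw numbers, while a Saturation score is a capped percentage of a fixed goal.

  // Minimal sketch of the two scoring models. All names here are
  // illustrative only, not part of any Smoslt API.
  public class ScoringModels {

      // MoreIsBetter: there is no finish line, so the raw number is the
      // score. For Hours, fewer hours is better, so compare directly.
      static boolean isBetterHours(int candidateHours, int currentBestHours) {
          return candidateHours < currentBestHours;   // 2714 beats 2863
      }

      // Saturation: the score is the percentage of a fixed, attainable
      // goal, capped at 100 because "more than done" has no meaning.
      static int saturationScore(double workCompleted, double workRequired) {
          double percent = (workCompleted / workRequired) * 100.0;
          return (int) Math.min(100.0, percent);
      }

      public static void main(String[] args) {
          System.out.println(isBetterHours(2714, 2863));      // true
          System.out.println(saturationScore(70.0, 100.0));   // 70
          System.out.println(saturationScore(150.0, 100.0));  // capped at 100
      }
  }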

Scoring the Saturation Model

This is something that is not obvious until you actually start scoring a specific score type against a Saturation model. It takes a different kind of thinking to score against a Saturation model when a number of different options each contribute to the score.

Arbitrary Subtotals

When scoring for Saturation, you know that the score can never exceed 100, but you also know that many different activities may be required to reach that goal. Those activities might be spread across several options, or, in the case of Smoslt, across SideEffects, which serve as Options.

Let’s take an oversimplified example of a ScalesUp score:

  • Scalable Software: software written to operate across as many machines as required
  • Cloud Provider: relationship with a vendor to provide as many machines as required
  • DevOps: deployment written to recognize and react to increased need

You can’t get to 100 without each of these aspects being complete, so really you have to be able to score each separately. The percentage of the score allocated to each is arbitrary; that they should total 100% is not arbitrary: it is a given.

  • Scalable Software option: 25%
  • Cloud Provider option: 17%
  • DevOps option: 58%

Only now can you begin the process of scoring. For each of the above three options, you have the task of separately scoring completion against its respective target.
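
Here is a rough sketch, again in illustrative Java rather than anything Smoslt actually contains, of how those apportioned subtotals might roll up into a single ScalesUp score. The option names and weights are the ones from the example above; everything else is assumed.

  import java.util.Map;

  // Sketch of apportioning a Saturation score across options. Option names
  // and weights come from the oversimplified example above; the class is
  // hypothetical, not part of Smoslt.
  public class ScalesUpScore {

      // Each option's share of the 100-point ScalesUp score.
      // Arbitrary, but the shares must total 100.
      static final Map<String, Integer> SHARE = Map.of(
              "ScalableSoftware", 25,
              "CloudProvider", 17,
              "DevOps", 58);

      // completion: 0.0 (not started) to 1.0 (done) per option
      static int scalesUp(Map<String, Double> completion) {
          double total = 0.0;
          for (Map.Entry<String, Integer> e : SHARE.entrySet()) {
              double done = completion.getOrDefault(e.getKey(), 0.0);
              total += e.getValue() * Math.min(1.0, done);
          }
          return (int) Math.round(total);   // can never exceed 100
      }

      public static void main(String[] args) {
          // ScalableSoftware done, CloudProvider half done, DevOps not started
          System.out.println(scalesUp(Map.of(
                  "ScalableSoftware", 1.0,
                  "CloudProvider", 0.5,
                  "DevOps", 0.0)));   // 25 + 8.5 + 0, printed as 34
      }
  }

The code is not the interesting part. The point is that each option contributes only the completed fraction of its own share, so the total can approach 100 but never pass it.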

Saturation Based Scoring Within a SideEffect

To explore what it means to score a specific option that affects a Saturation score, let’s look at the simpler of the options above, Cloud Provider. Note first that this option may affect several different score types, including all of those below, most of which are Saturation based scoring models.

  • Saturation
    • ScalesUp
    • ScalesDown
    • Durability
    • ManagerSpeak
    • FeedbackSpeed
  • MoreIsBetter
    • Hours
    • POLR
    • LongTerm

This breakdown may or may not be representative of what really belongs in Saturation based scoring, but let’s take one that is pretty clear: ScalesUp. It is clear because an app either scales up properly or it doesn’t; it can easily be measured and tested, and it is pretty obvious when it fails.

So now we know that within ScalesUp, repeating from above, each of these options contributes an arbitrarily apportioned part of this score:

  • Scalable Software option: 25%
  • Cloud Provider option: 17%
  • DevOps option: 58%

If we look at only the CloudProvider option, or SideEffect, we then need to do these three things, sketched in code after the list:

  • Set the apportionment of the score at 17%
  • As the necessary work is completed, increment the score against that 17%
  • When the work is complete, you have incremented this score by no more and no less than 17 points
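
A minimal sketch of those three steps, once more in hypothetical Java: the 17-point share comes from the example above, while the class and methods are made up for illustration.

  // Sketch of how the CloudProvider SideEffect's contribution to ScalesUp
  // might be tracked. The 17-point share comes from the example above;
  // everything else is an illustration, not a Smoslt API.
  public class CloudProviderSideEffect {

      private static final int SHARE = 17;    // apportionment of ScalesUp
      private double fractionComplete = 0.0;  // 0.0 .. 1.0 of the work done

      // As each piece of the necessary work finishes, bump the fraction.
      void completeWork(double fractionJustFinished) {
          fractionComplete = Math.min(1.0, fractionComplete + fractionJustFinished);
      }

      // Contribution to ScalesUp: climbs toward 17 and never passes it.
      int contribution() {
          return (int) Math.round(SHARE * fractionComplete);
      }

      public static void main(String[] args) {
          CloudProviderSideEffect cp = new CloudProviderSideEffect();
          cp.completeWork(0.5);                   // half of the vendor work done
          System.out.println(cp.contribution());  // 9 (roughly half of 17)
          cp.completeWork(0.5);                   // the rest of the work done
          System.out.println(cp.contribution());  // 17, and it stops there
          cp.completeWork(0.5);                   // extra work adds nothing
          System.out.println(cp.contribution());  // still 17
      }
  }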

Another Challenge of Saturation Modeling: 100%

Saturation is based on a scale of 0 to 100%. This sounds perfectly logical, and it can even be logical in implementation. Take ScalesUp, for example. If you are using Cassandra for your persistence store and have the appropriate use case, at least for the persistence piece you are at 100%. Cassandra scales linearly right out of the box. Can’t get any closer to 100% than that. So if you have Cassandra in your mix, make your ScalesUp score 100%. Right?

Not so fast there, buddy. Let’s take another look.

Your project has a lot more pieces than just a persistence store, and each of those pieces can wreck a ScalesUp score. If your persistence store scales perfectly, but you have a web tier and a messaging tier too, and they don’t scale up well, then you’re not at 100% yet. So now you have to modify your scoring such that the web tier, the messaging tier, and the persistence store together add up to 100%. Is that a third each? Or does the web tier get 50%, the persistence store 40%, and the messaging tier 10%? Good question.
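
To see why the split matters, plug some made-up completion numbers into that 50/40/10 idea: if the persistence store (Cassandra) is at 100% but the web tier is only 60% of the way there and the messaging tier 50%, ScalesUp works out to 0.5 × 60 + 0.4 × 100 + 0.1 × 50 = 30 + 40 + 5 = 75. A perfect persistence store still leaves you 25 points short of saturation.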