Category Archives: Design and Intent

Saturation vs MoreIsBetter Scoring for SideEffects

Imagine two different types of scoring scenarios:

  • Hours – the number of man-hours it takes to complete a complex software creation schedule
  • ScalesUp – a feature of software that lets it scale to a near-infinite number of users

Both of these are actually Saturation based scoring models, but only in theory. In practice, there is no hope of ever reducing the number of hours to zero, so Hours is really always a MoreIsBetter model – the more efficient the process (in this case, the fewer the hours), the better.

  • 2863 hours: good
  • 2714 hours: better

But ScalesUp is like most other scoring aspects of creating software. It is a goal which is actually quite attainable in a very finite sense. Once you have built software which can and does deploy to near-infinite scale, you are pretty much done with that. It works, and more, in this case, is not better. You can’t improve much on perfection. This is the Saturation model, and that model can be easily represented as a 0-100 score, where 100 is the highest score.

  • score of 0 – 0% complete
  • score of 70 – 70% complete
  • score of 99 – 99% complete

We can conclude, then, that in practice each of these fits a different scoring model:

  • Hours: MoreIsBetter model
  • ScalesUp: Saturation model

Scoring the Saturation Model

This is something that is not obvious until you start the process of scoring a specific Saturation based score. It takes a different kind of thinking to score against a Saturation model when there are a number of different options that each contribute to the score.

Arbitrary Subtotals

When scoring for saturation, you know that the score can never exceed 100, but you also know that many different activities may be required to meet that goal. These activities might spread across several options – or, in the case of SMOSLT, across SideEffects used as Options.

Let’s take an oversimplified example of a ScalesUp score:

  • Scalable Software: software written to operate across as many machines as required
  • Cloud Provider: relationship with a vendor to provide as many machines as required
  • DevOps: deployment written to recognize and react to increased need

You can’t get to 100 without each of these aspects being complete, so really you have to be able to score each separately. The percentage of the score allocated to each is arbitrary; that they should total 100% is not arbitrary – it is a given.

  • Scalable Software option: 25%
  • Cloud Provider option: 17%
  • DevOps option: 58%

Only now can you begin the process of scoring. For each of the above 3 options, you have the task of separately scoring completion against its respective target.
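Here is a minimal Java sketch of that apportionment idea, using the percentages above. The class and method names are hypothetical, not taken from the SMOSLT code base.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical sketch: apportion a Saturation score across options.
    public class SaturationApportionment {

        private final Map<String, Integer> percentByOption = new LinkedHashMap<>();

        public void apportion(String optionName, int percent) {
            percentByOption.put(optionName, percent);
        }

        // The individual percentages are arbitrary; that they total exactly 100 is not.
        public void validate() {
            int total = percentByOption.values().stream().mapToInt(Integer::intValue).sum();
            if (total != 100) {
                throw new IllegalStateException("Apportionments total " + total + ", not 100");
            }
        }

        public static void main(String[] args) {
            SaturationApportionment scalesUp = new SaturationApportionment();
            scalesUp.apportion("ScalableSoftware", 25);
            scalesUp.apportion("CloudProvider", 17);
            scalesUp.apportion("DevOps", 58);
            scalesUp.validate(); // passes only because 25 + 17 + 58 == 100
        }
    }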

Saturation Based Scoring Within a SideEffect

To explore what it means to score a specific option that affects a saturation score, let’s look at the simpler Cloud Provider option above. Note first that this option may affect several different score types, including all of those below, most of which are saturation based scoring models.

  • Saturation
    • ScalesUp
    • ScalesDown
    • Durability
    • ManagerSpeak
    • FeedbackSpeed
  • MoreIsBetter
    • Hours
    • POLR
    • LongTerm

This breakdown may or may not be representative of what really belongs in Saturation based scoring, but let’s take one that is pretty clear – ScalesUp. This is because an app either scales up properly or it doesn’t. It can easily be measured and tested, and it is pretty obvious when it fails.

So now we know that within ScalesUp, repeating from above, each of these options contributes an arbitrarily apportioned part of this score:

  • Scalable Software option: 25%
  • Cloud Provider option: 17%
  • DevOps option: 58%

If we look at only the CloudProvider option, or side effect, we then need to do these things (see the sketch after this list):

  • Set the apportionment of the score at 17%
  • As the necessary work is completed, increment the score against that 17%
  • When this work is complete, you have fully incremented this score by no more and no less than 17
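A minimal sketch of that capped incrementing, again with hypothetical names rather than the real SideEffect API:

    // Hypothetical sketch: one side effect's contribution to a Saturation score.
    public class SideEffectContribution {

        private final int apportionedPoints; // e.g. 17 for the CloudProvider side effect
        private int earnedPoints = 0;

        public SideEffectContribution(int apportionedPoints) {
            this.apportionedPoints = apportionedPoints;
        }

        // Called as the necessary work completes; fractionComplete ranges from 0.0 to 1.0.
        public void reportProgress(double fractionComplete) {
            double clamped = Math.max(0.0, Math.min(1.0, fractionComplete));
            earnedPoints = (int) Math.round(apportionedPoints * clamped);
        }

        // Contributes at most its apportionment: exactly 17 only when the work is fully complete.
        public int contributionToSaturationScore() {
            return earnedPoints;
        }
    }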

Another Challenge of Saturation Modeling: 100%

Saturation is based on an index of 0 to 100%. This sounds perfectly logical, and it can even be logical in implementations. Take ScalesUp for example. If you are using Cassandra for your persistence store and have the appropriate use case, at least for the persistence piece you are at 100%. Cassandra scales linearly right out of the box. Can’t get any closer to 100% than that. So if you have Cassandra in your mix, make your ScalesUp score 100%. Right?

Not so fast there, buddy. Let’s take another look.

Your project has a lot more pieces than just a persistence store. Each of these pieces can wreck a ScalesUp score. If your persistence store scales perfectly, but you have a web tier and a messaging tier too, and they don’t scale up well, then you’re not at 100% yet. So now you have to modify your scoring such that the web tier, messaging tier, and persistence store together add up to 100%. Is that a third each? Or does the web tier get 50%, the persistence store 40%, and the messaging tier 10%? Good question.
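As a hedged worked example, using the 50/40/10 split above and completion levels that are purely assumed for illustration: if Cassandra puts the persistence store at its full 40, the web tier is only half done (25 of its 50), and the messaging tier has not started (0 of its 10), then the ScalesUp score is 40 + 25 + 0 = 65, nowhere near 100.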

 

 

 

code guidelines/notes

guidelines

  • avoid anything that requires re-architecting later
  • do anything that sets the stage for straightforward additions later

 

here is my baseline

  • all projects are MJWA OSGi ready jars, but not running in OSGi container
  • all my projects do not use carefully architected checked exceptions
    • but rather just stupid RuntimeExceptions to fix later
  • do not do real testing, but do use JUnit to just get the methods running
  • do use Neal Ford’s composed method, but only enough to make my job easier, not religiously

 

here are my options

  • foo

 

my baseline vs other baselines

  • versus Marcos baseline
  • versus Matt baseline
  • versus ….

SMOSLT.main

Command Line Application for SMOSLT

Glossary

  • PL: ProjectLibre
  • [compliant]: ProjectLibre file which conforms to exact specifications expected for SMOSLT.stacker – see separate document

Features

  • import a [compliant] PL file and run SMOSLT.assume against same
  • something here about two files, one for baseline another for latest something
  • maybe something here about narratives or options or generating

SMOSLT.stacker [compliant] specifications

  • all baseline tasks
  • no automated options (generated by SMOSLT)
  • resources individually named, with the exact group each belongs to
  • all tasks assigned with either
    • specified individual by exact name
    • specified group(s) by exact name, and count
    • comma delimited

Somewhere need to note that

 

cartoon of evaluating options process

  • from situation, come up with type of … to look up prototype/template
  • copy template into my…
  • modify template to reflect situation
  • add resources to match task titles
  • add/modify predecessor relationships
  • add/modify resources to reflect situation
  • add/modify options modifiers to reflect situation
  • rerun to extend out into actual schedule

 

primary goal is to allow you to not have to pick two

  • keep things fluid
  • not limiting visibility into options
  • allow you to have less than complete information

SMOSLT.options – OrGrouping

“This or this or this option, but not more than one from this group”

What Or-Grouping is NOT:

SMOSLT, as it relates to software options, offers a way to evaluate where to commit your limited resources. Should I organize the build around a Continuous Integration server? Or commit those same resources, instead, to deploying my services to smaller linux container modules?

  • Continuous Integration
  • Docker Container Service Deployments

These concerns each require a commitment of resources, and I only have enough resources to do one of them, but they do not overlap. I could do both, if I had enough resources.

What Or-Grouping IS:

If I decide to commit resources to Continuous Integration, I’m still not done with the comparison of options. That’s where or-grouping comes in. Consider these options for Continuous Integration servers:

  • Jenkins
  • Hudson
  • Thoughtworks Go server
  • Bamboo

I need to pick whichever one of these options makes the most sense for my organization.

I would never choose more than one of these, it’s an either/or choice. Pick one.

How SMOSLT.options Fails Without Or-Grouping:

The magic of SMOSLT.options is that, unlike its human operator, it can compare every combination of options given to it.

Yet this same feature, without or-grouping, has an unintended side effect. For example, it might cause Jenkins and Bamboo to be selected for comparison at the same time! Wrong! The human would know that you either use Jenkins or Bamboo to achieve Continuous Integration, but you would never use both in combination! SMOSLT.options has no way of knowing that, without or-grouping.

How To Use Or-Grouping:

Or-grouping is implemented via naming conventions. Consider again, the same list of candidates for SMOSLT.options to compare:

  • Jenkins
  • Hudson
  • ThoughtworksGo
  • Bamboo

To implement an Or-Group, we rename this same list as follows:

  • Ci1-Jenkins
  • Ci2-Hudson
  • Ci3-ThoughtworksGo
  • Ci4-Bamboo

SMOSLT.options module now knows to never evaluate any combination of two or more of these options at the same time. For example, marking chosen options with [selected], SMOSLT.options would not evaluate the following combination (a minimal sketch of this check follows the example):

  • Ci1-Jenkins [selected]
  • Ci2-Hudson
  • Ci3-ThoughtworksGo
  • Ci4-Bamboo [selected]
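A minimal sketch of how that naming-convention check could work, assuming the or-group is simply the letters in front of the digit and the dash. None of these class or method names come from the real SMOSLT.options code.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Hypothetical sketch: two options belong to the same or-group when their
    // names share the prefix before the first '-' (minus any trailing digits).
    public class OrGroupFilter {

        static String orGroupOf(String optionName) {
            int dash = optionName.indexOf('-');
            String prefix = dash < 0 ? optionName : optionName.substring(0, dash);
            return prefix.replaceAll("\\d+$", "");
        }

        // Rejects any combination that selects more than one option from the same or-group.
        static boolean isLegalCombination(List<String> selectedOptions) {
            Set<String> groupsSeen = new HashSet<>();
            for (String option : selectedOptions) {
                if (!groupsSeen.add(orGroupOf(option))) {
                    return false;
                }
            }
            return true;
        }

        public static void main(String[] args) {
            // Jenkins plus Bamboo share the "Ci" group, so that combination is rejected.
            System.out.println(isLegalCombination(Arrays.asList("Ci1-Jenkins", "Ci4-Bamboo")));   // false
            System.out.println(isLegalCombination(Arrays.asList("Ci1-Jenkins", "DockerDeploy"))); // true
        }
    }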

Scoring and Inheritance with Or-Groups

Each of these CI server options is more alike than different. Differences between Continuous Integration servers exist, but the big difference is not between them; it is between using a CI server and not using one. Again, the list, only this time showing the name of the Java class that does the scoring for each.

  • Ci1-Jenkins.java
  • Ci2-Hudson.java
  • Ci3-ThoughtworksGo.java
  • Ci4-Bamboo.java

Scoring each of these means writing each of the above Java classes, copying and pasting the common scoring code into each, and then changing whatever is unique after pasting.

 

We all know the problems of maintaining copy-pasted code. Not good.

 

So instead, we refactor the above group to add a common super-class. Now the or-group class structure looks like this.

 

  • Ci0-ContinuousIntegration.java
  • Ci1-Jenkins.java – extends Ci0-ContinuousIntegration
  • Ci2-Hudson.java – extends Ci0-ContinuousIntegration
  • Ci3-ThoughtworksGo.java – extends Ci0-ContinuousIntegration
  • Ci4-Bamboo.java – extends Ci0-ContinuousIntegration

 

Now we can put the common scoring code in Ci0-ContinuousIntegration.java, and the other classes will only contain the scoring code that pertains to each unique server.
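A minimal sketch of that structure in plain Java. Java identifiers cannot contain hyphens, so the file names above would map to class names roughly like these, and the numbers are placeholders only.

    // Hypothetical sketch of the or-group class structure.
    public abstract class Ci0ContinuousIntegration {

        // Scoring shared by every Continuous Integration server option lives here.
        public int scalesUpContribution() {
            return commonCiScore() + uniqueServerScore();
        }

        protected int commonCiScore() {
            return 5; // placeholder for the portion of the score any CI server earns
        }

        // Each concrete server supplies only what is unique to it.
        protected abstract int uniqueServerScore();
    }

    class Ci1Jenkins extends Ci0ContinuousIntegration {
        @Override
        protected int uniqueServerScore() {
            return 2; // placeholder for Jenkins-specific scoring
        }
    }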

 

Spreadsheet Reporting

The existence of an or-grouping in a SMOSLT.options run alerts the SMOSLT.analytics module that you care about comparing various Continuous Integration options.

 

So the SMOSLT.analytics module prepares a separate tab in the spreadsheet document, just to compare those options. It names this tab, appropriately, “Continuous Integration”.

 

Or-Group Score Summarization?

As mentioned above, you have two primary issues when looking at Continuous Integration for your project.

  1. Should I even do Continuous Integration at all, or devote resources to something else?
  2. If I do, which of the many attractive servers should I choose to implement?

 

SMOSLT.analytics will prepare a spreadsheet with potentially many sheets to help you with this and other options. As stated above, it will even prepare a tab within that spreadsheet to help you with number 2 – choosing between or-group options.

 

The analytics piece does NOT, however, help you aggregate or summarize Continuous Integration servers for number 1. If you give it 4 or-group options, it will show each individually, making your spreadsheet potentially harder to read when deciding whether to commit resources to Continuous Integration or some other option. For that reason, you may wish to make a series of separate runs. Try this sequence, for example.

  1. Pick Ci1-Jenkins alone, in your first runs, to compare what happens when you commit resources to Continuous Integration versus committing resources to other options such as Docker Deployments.
  2. Once you’ve decided that Continuous Integration is probably going to be included in your project plan, make some more runs with each of the rest of the CI or-group included. That will let you compare various CI servers and make a final decision.

SMOSLT Project/Modules Summary

You probably won’t understand SMOSLT as an app until you first read why SMOSLT is run within an IDE. It just won’t make sense.

SMOSLT.app

This is the main SMOSLT project, where all the important action happens.

 

These actions include

  • running the other modules
  • taking the assumptions you give it, and coming up with a list of costs based on those assumptions

SMOSLT sequence of events within modules:

 

Orchestrated primarily from the app module, but also manually from within the IDE, given that there is no UI (a rough sketch follows the list below):

  • before firing, user has already
    • created a compliant PL file
    • created list of options and other assumptions in assume module
    • created SideEffect code that drives each option.
  • imports PL file
  • imports assumptions from assume module
  • sends each of these to the optaplanner options module
    • options
    • schedule binary
    • analytics
  • optaplanner options module then follows this sequence
    • toggles one option on or off at a time
    • sends that combination of options to stacker
      • which can be on separate threads or machines if required
      • which writes each score and binary to analytics module
      • which also then returns score back to options module
    • runs to some reasonable termination whatever that means
      • brute force if options list small enough
      • more elaborate search process if options list too big
  • user then reviews run in analytics module
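A rough sketch of the brute force variant of that loop. These interfaces are made up purely to illustrate the flow of data between the modules; they are not the real SMOSLT APIs.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of the orchestration loop described above.
    public class RunSketch {

        interface Stacker { int stackAndScore(List<String> enabledOptions); }

        interface Analytics { void record(List<String> enabledOptions, int score); }

        // Toggle each option on or off, send every combination to the stacker,
        // and record every score for later review in the analytics module.
        static void bruteForce(List<String> options, Stacker stacker, Analytics analytics) {
            int combinations = 1 << options.size();
            for (int mask = 0; mask < combinations; mask++) {
                List<String> enabled = new ArrayList<>();
                for (int i = 0; i < options.size(); i++) {
                    if ((mask & (1 << i)) != 0) {
                        enabled.add(options.get(i));
                    }
                }
                analytics.record(enabled, stacker.stackAndScore(enabled));
            }
        }
    }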

 

SMOSLT.domain

Java classes consumed by other modules. Kept separate just for purposes of clarity.

SMOSLT.given

Assumptions are required for SMOSLT to do anything meaningful.

 

This is YOUR area, because you are responsible for all assumptions, even though you might start out with 100% default or partially customized assumptions provided by others. This is like what the cop tells you: “Ignorance is no excuse.” The results you get will be no more satisfactory or correct than the assumptions you used to initiate a specific run.

 

Areas that assumptions cover:

  • Unit costs.
  • Beliefs about if-then consequences – if I have a java task with no testing, then I will get 30% more unanticipated work fixing bugs. That’s a belief; no one can know what really happens until it does (see the sketch after this list).
  • Story templates. If every task is a story, per agile approaches, then the sum total of all tasks is a narrative arc that makes up a story template. No story template is truly representative of what really happens, but to anticipate costs you have to start somewhere.
  • Actual ProjectLibre files. Ease of use demands that you start with a ProjectLibre template, compliant to SMOSLT specifications.
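A minimal sketch of how one such if-then belief might be expressed as code rather than prose. The names and numbers here are illustrative only, not part of the real assumptions module.

    // Hypothetical sketch: one if-then belief, expressed as data.
    public class IfThenBelief {

        final String condition;       // e.g. "java task with no testing"
        final double extraWorkFactor; // e.g. 1.30 means 30% more unanticipated work

        public IfThenBelief(String condition, double extraWorkFactor) {
            this.condition = condition;
            this.extraWorkFactor = extraWorkFactor;
        }

        public double adjustHours(double estimatedHours) {
            return estimatedHours * extraWorkFactor;
        }

        public static void main(String[] args) {
            IfThenBelief noTesting = new IfThenBelief("java task with no testing", 1.30);
            System.out.println(noTesting.adjustHours(100)); // prints 130.0, per this belief
        }
    }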

SMOSLT.stacker

Stacker, or ScheduleStacker, creates a real schedule from a ProjectLibre specification template. Stacker knows nothing about SMOSLT or all the fancy stuff that SMOSLT does; it just stacks up tasks and allocates resources like it is told to do.

 

Stacker is designed to be as simple and fast as possible, because it might get called to re-stack a schedule thousands of different times in a single session.

 

Stacker could theoretically be used by anyone, without any SMOSLT usage. It is not anticipated that anyone would wish to do this, but it is welcomed if desired.

SMOSLT.mprtxprt

SMOSLT does its active work in SMOSLT code, and does not interact directly with ProjectLibre APIs. A ProjectLibre schedule is entirely converted into SMOSLT code, and then, when done, entirely converted back into a ProjectLibre file. Ne’er do the twain meet. This conversion happens in this module.

 

Only the brave should ever crack open this module. ProjectLibre APIs can be quite challenging to the uninitiated, and a giant black hole of time. As of Oct 2014, it is being rewritten from the ground up anyway.

SMOSLT.main

Someday SMOSLT may be runnable from a real GUI, like you would expect from any decent application.

That day has not yet come. So until then, it is run from a command line. This command line, or CLI, code is maintained here.

 

SMOSLT.options

See separate document

 

SMOSLT.analytics

 

 

Is This Your Story? Joe the Architect

Imagine Joe Architect as some guy architecting a large software project/team. Also imagine Thoughtworks as a representation of the latest thinking about how Joe might approach his job with the greatest effectiveness. The Thoughtworks reference is, in this case, coincidental; it could be any set of disciplines that apply to Joe’s work.

 

The primary constraint is that Joe has a reasonably limited budget. He works for XYZ corp, which is properly funded, but still not operating in a cost vacuum. Joe has both time and budget constraints that prevent him from going absolutely hog wild. He has to deliver something within some kind of limits. He can’t simply hire Thoughtworks, much as he would like to.

 

Joe is a good student. He reads and absorbs all of Martin Fowler’s stuff and also sat through Neal Ford’s 7.5 hours on Continuous Delivery (Jez Humble yada) at the last NoFluff. This is all still very high level for Joe – even though he is a good student – as he knows that any single slide in the presentations he has watched and absorbed might represent days or even weeks of real-life implementation to get it all in place.

 

The problem space is similar to the “Pick Two” sign on the wall of his car mechanic’s shop: “You want it fast, high quality, and cheap? Pick any two.” Joe has literally hundreds of options available to his team, from how to refactor his code, to which of dozens of persistence stores to use, to which continuous delivery approaches to implement. He can’t evaluate it all, and he has to direct his team without complete information. Only it isn’t pick two of three, it’s pick 5 of 300 great options.

 

Worse yet, the problem space is NP-Complete (see http://en.wikipedia.org/wiki/NP-complete) – even if you could give the problem to a computer to solve, there would be so many options that the computer could grind for days without producing its first decision. Just too many combinations of options to consider using brute force alone.

 

You know what Joe does already. He just holds a wet finger to the air, makes some decisions, and his team goes to work. That is the problem space. Does Joe have any better combination of options than just going with the direction the wind is blowing?

 

Joe has too many great options.

 

Here’s the real kicker:

It isn’t just that Joe has a problem that he can’t solve effectively.

 

The real loss is that there may be one or more options, ones that he doesn’t know about, that completely transform his job from a moderate success or even partial failure, to a runaway kick ass success! Options that really make a difference. But how would he know?


He might entirely miss the important options. Lost opportunity.

SMOSLT vs Other PPM Software

PPM is a big category

SMOSLT does not attempt to compete with other PPM software. There is plenty to compete with. Use what works best for you.

SMOSLT can be compared with other PPM tools

SMOSLT’s place in the mix is to be fast and relevant to both developers and project managers.

SMOSLT is intended to be ridiculously flexible and adaptable. No good software can do that. That’s why SMOSLT is, happily, such crappy software.

A fantastic list of other PPM tools is compiled here

http://www.prioritysystem.com/tools.html

SMOSLT – Runs in IDE Only

SMOSLT could have been written as a great, standalone app. Even now, that could still happen. But it won’t. Why?

SMOSLT is intended as crappy software

My name is Pete Carapetyan. I know how to write great software. SMOSLT is not written as great software.

Instead, SMOSLT is barely held together with chewing gum and baling wire. Happy Path Software at its worst.

This is by design.

SMOSLT is barely usable

… and so it shall remain. To use SMOSLT you probably have to be a java developer, and you need a modest amount of patience.

You probably have to follow the YouTube video just to figure it out.

You can probably figure out 1000 ways that SMOSLT can and should be improved.

Why not improve it? You could make a great product!

Indeed. There are probably much better ways to spend one’s time. SMOSLT is about making other software great, not about making itself great.

Gotta pick your battles. Plenty of other battles to fight.

Improve your own software, not SMOSLT

SMOSLT is just a means to an end. Don’t improve it. Instead, spend that time improving your own software.

Unless of course you can’t help yourself. If that is the case, send your patches to pete@datafundamentals.com

But my own stuff is being mis-represented! That must be fixed!

Ahah! Different topic of conversation! Your own [insert technology here] is being mis-represented? How can I fix that?

Is there an easier way?

We could probably ship SMOSLT as a fully self-contained virtual machine. IDE, projects, etc. Just launch and run.

No, we haven’t done that yet.

 

Impossible to Know What is Possible

The mathematics of creating software has turned upside down.

There is a sweet spot, some mix of consuming work by others and simply re-creating it. Finding this sweet spot is getting to be a ridiculous proposition – not because the options are too few, but because they are too many.

Pick the right set of building blocks, and you can develop anything super quickly:

  • Platforms
  • Tools
  • Languages
  • Practices
  • Sequences

This has become a ridiculously time consuming process. Just knowing what options to pick from can consume an entire career. Picking the right combination of hundreds, even thousands of options? Mathematically it’s absurd to even think about it.

Obviously there are many approaches to solving this problem:

  • The glaze over approach – study options until you glaze over. Then pick what you studied.
  • The social media approach – ask your friends, follow social media.
  • The FoxNews approach – just know what is right. If you disagree, you are wrong. There, done.
  • The career halt approach – just stop being productive until you’ve studied every option
  • The management by following approach – pick the latest offering by big, credible vendors like Oracle, IBM

SMOSLT is an attempt to add one more option to this list. It is a variation on the glaze over approach. It doesn’t solve the problem; it just delays the moment of glaze over, letting you consider options a few minutes longer, and hopefully without halting your software development career in the process.

  • The SMOSLT approach – present combinations of options in terms of palatable math, metrics.

SMOSLT won’t feed your dog automatically, solve world hunger, or get your favorite politician elected. It may help you take a few more deep breaths before charging off and building software with ill-considered technology combinations that end up sucking the life force out of your body. If it accomplishes even a part of that modest goal, it’s a big win.