Jonathan's Blog

Mindful Leadership and Technology



Positive Behavioral Metrics - How does it work?

Posted on .

In my last post I jumped right into the fray of describing a whole system for data-driven software development that includes normal stuff (Business KPIs, OKRs, Software metrics) but also some weird stuff - what I call Positive Behavioral Metrics. I didn't describe what I meant, so I am going to do that now.

I also didn't describe what to do with the other half of that coin - negative feelings and behavior - but I am going to do that separately, in a future post.

Positive Behavioral Metrics means keeping track of and counting all the good stuff that people do. A system to do this (which could be software-based or not) should have the following characteristics:

  1. A means to create and track 'wins' or instances of people doing good stuff. This could be someone who gave a great presentation, won significant new business, or contributed to the positive roll-out of a new initiative. Really anything that is good (for individuals, teams, or the organization in general) - large or small does not matter. Concrete behavioral information is preferred - a specific good thing is better than a vague good thing. But honestly, even starting with vague is better than nothing.
  2. Notification - the person should always be notified when someone recognizes them.
  3. Notification of Others - the person's boss and their boss's boss (if applicable) should be notified about the good thing that was done.
  4. Reports - a person should be able to go in and see all the good things that they did in a time period (last month, this year, a certain year) so that it can be used in reviews.
  5. Track positive behavior to company values: any time that someone does something good and it is tracked, the person entering it should have the ability to tie it to some part of the company's values.
  6. Provide a report on people who consistently recognize other people. These are your energy fountains - the people producing energy in your organization. This is another key number, in addition to the people who get recognized the most.
  7. Organization-level Reports: should include reports on doers (those doing the good stuff) and recognizers (those doing the recognizing). For both you want to know how much of it is happening, where it is happening, and to be able to drill down and see who is doing it and who they work for.

This is really pretty straightforward - honestly this can be built for very little $$ and you could get away with 1, 2, and 3 to get started. You don't need more than that.
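To make the "very little $$" point concrete, here is a rough sketch of what features 1, 2, and 3 might look like. This is not the tool I mention later - the function names, the shape of a 'win', and the stubbed-out notify call are all just illustrative:

```javascript
// Minimal sketch of features 1-3: record a 'win', notify the person,
// and notify their boss and boss's boss. Names here are hypothetical.
const wins = []; // in-memory store; swap in a real database later

function recordWin({ recognizedBy, recipient, description, companyValue }) {
  const win = {
    recognizedBy,   // who is doing the recognizing
    recipient,      // who did the good thing
    description,    // a concrete behavior, e.g. "gave a great client demo"
    companyValue,   // optional link back to a company value (feature 5)
    createdAt: new Date(),
  };
  wins.push(win);

  // Feature 2: notify the person being recognized.
  notify(recipient.email, win);

  // Feature 3: notify the boss, and the boss's boss if there is one.
  if (recipient.boss) {
    notify(recipient.boss.email, win);
    if (recipient.boss.boss) {
      notify(recipient.boss.boss.email, win);
    }
  }
  return win;
}

function notify(email, win) {
  // Stub: wire this up to whatever email or chat system you already have.
  console.log(`To ${email}: ${win.recipient.name} was recognized - ${win.description}`);
}
```

The reports in feature 4 then fall out almost for free: filter the stored wins by recipient and date range.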


Outside of the system, you need a program around the raw numbers and reports to really drive maximum effect. And I don't mean prizes or gift cards. I honestly think that tying it to rewards or awards isn't terribly interesting or effective.

I'm really talking about how this applies to people's growth, to their careers, and to their happiness. This is what will make it the most powerful. This should generally include:

  1. Personal recognition - by which I mean, if you're in charge and you get the emails about people doing awesome stuff, then walk around or visit the people and talk to them and show them that you are aware of what they do. You can take them to lunch if you want, or have a dinner or something, but do some form of recognition where their peers see you talking to them about what they did.
  2. Thank you notes - I'm personally terrible at this, but many people swear by it. Sending handwritten thank you notes to people (or their spouses) is something that can be very effective.
  3. Use of reports - encourage people to review their data, especially at review time to be sure that they use this data when filling in their reviews or 360 evaluations.
  4. Encourage managers to recognize - use the recognizer score to ensure that recognition is happening and that you have even, consistent coverage in your organization.

There are many other things you can do. These are simple examples that you can start with that don't cost much but do build a lot of energy in your organization.

Next up: what to do with the negative stuff.


Pillars of Data-Driven Software Engineering

Posted on .

I've spent quite a while in companies where metrics and KPIs are important: they are valuable tools, when used correctly. Everyone is more empowered when they see the business succeeding and see their own role in making it happen.

But I chose the image for the header of this post intentionally - it shows people working together, not a bunch of numbers or a pie chart. This is because people must engage with the data, and ideally, they should be the ones requesting more data to do their jobs better. This only happens if you take them into account at the beginning of any such program and figure out what makes them want to succeed, and what they need to do their jobs better.

When it comes to KPIs and metrics, I've seen problems crop up with them that usually fall into 3 big categories:

  1. The creation and roll-out are ham-handed and focus on management without listening to people, understanding what they need or want, or addressing their fears. People will be afraid - engage the fear or risk the disengagement that will follow.
  2. What happens if a business, department, or team has an off year/quarter/month? How can you keep team members engaged? How do you keep your KPIs or data-based management from feeling like a flogging?
  3. How do you balance the contributions of teams and individuals? How do you strike a balance between team metrics that measure (and reward) team success vs. those that recognize individual contributions? Both are important.

Because I've been getting exposure to OKRs through some client work recently, I think it is worth looking into how a successful metrics/OKR system can be constructed and avoid the problems up above.

I have created this diagram to show how the elements should work together:



Here is a quick overview of the 5 major components:

  1. Company Vision is critical. It really comes from outside and is separate from the pillars, but it influences them strongly and should determine (especially in the case of KPIs and OKRs) what they are. Company vision is the 'Why' of the company and you should see this strongly reflected in KPIs and OKRs.
  2. KPIs (Key Performance Indicators) measure the ongoing business performance of the organization, including its profitability and how it achieves its vision. If your KPIs miss one of these marks (I've seen it) then your employees will be disconnected from it.
  3. OKRs (Objectives and Key Results) - these are measurable goals that are more transitory than KPIs. OKRs should be a measure of what you are doing NOW (this month, this quarter, this year) in order to achieve and improve results. OKRs done well should result in improved KPIs - at their best they show us that we have selected the right work and done it well.
  4. Software Metrics - this has proved elusive in organizations that I have worked for. What is the measure of good software development? What is the measure of a great developer? We're currently starting to use GitPrime, internally and with some clients. Good engineering metrics should result in an agreed-upon set of standards for software engineers and a high bar for quality of work, and they should drive us to deliver more and better features, so that we can do more valuable work.
  5. Positive Behavioral Metrics - I will spend some time outside the bulleted list on why this is important and how this works. But if you're skimming this then you only need to know that the orange arrows tell you why this is important. When you have a down quarter, when a team struggles, what gives people energy and lifts them up so they can deliver anyway? What makes them carry on? What makes them feel that it is worth it to do so? This is driven by positive behavioral metrics.

Don't buy it? Wishing for comprehensive behavioral metrics? Let me explain why positive is so key here:

  1. You never have to look far to hear negative behavioral information. It's there and everyone knows about it; the focus here is to gather up the positive stuff to act as a counterbalance.
  2. Your other pillars - KPIs, OKRs, Software Metrics leave plenty of room for the realistic and pessimistic. If there are issues, they will show up there.
  3. If you have a real, true behavioral problem on your team somewhere, you know it. You need to deal with it. Waiting around for a mid-year or annual review is wrong. The absence of some type of negative ledger should create MORE urgency for you as a manager - not less. Deal with your drains, negatrons, and nasty actors - do it now.
  4. HR already has systems and paperwork for negative behavioral problems. You don't really need another one.
  5. An empty ledger says a lot - if someone has no items in their positive behavioral tracking system (essentially the system for sending out positive vibes), you can assume that they are not a force for inspiration and energy on a team.
  6. Link positive behavior to your company vision and guiding principles whenever possible. The arrows in the diagram show everything flowing out from your company vision, and this is how it should be. Your vision and mission (or whatever you call it) should be your barometer and benchmark for what you do. When people succeed and you link this back to parts of your mission or principles, it helps reinforce why what they did was important, and later on you can look at where and how your principles are being reinforced and see if there are any gaps worth thinking about.

Challenges with 'Comprehensive' Behavioral Metrics

In one of the organizations I worked in that was very metrics-focused, we had a metrics section for behavioral traits. This was not exclusively about positive behaviors.

The ratings were from 1 (not at all cool in any way) to 5 (godlike).

There were a lot of 3s and 4s handed out by managers. That was correct and how the scale was designed, but it led, almost exclusively, to problems.

Problem 1: Most people view themselves as 4s or 5s. As a result, the reaction that people had to pretty realistic, generally positive reviews (say a 3.5) was one of disappointment: "You mean I'm not a 5?" And this in a system that was set up so that almost no one could ever get a 5.

Did anyone ever move up from being a 3 to a 5? A few, but it didn't have anything to do with the review process or the behavioral traits metrics. It had to do with great managers and mentors making a real difference in people's careers, so that they were uplifted and excited to come to work. Getting a 3 (out of 5) on a performance review never does that.

Problem 2: By the time you were handing out a 2 or a 1, it was too late and you should have fired that person already. I can think of only one exception to that, and it was a 2.5, not a 1.

This is why I have a pretty dim view of behavioral metrics, and see that focusing on the positive is the most valuable way to come at this.


The diagram up above is how I see this all working together. I am currently working with Weekdone for OKRs and I like how you can essentially set and use this at the granularity you prefer. You can have team OKRs and you can have individual OKRs, and they can roll up or not.

I think this is an excellent pivot point between the team and the individual, and it allows you to customize this to your needs.

For software development I think a lot of your engineering metrics are going to be on an individual level (# of commits, accepting pull requests, etc).

This allows you to keep KPIs at the global or divisional level, where I think it is good to have a strong focus on the team and the sum of the parts working together.


How To Get It All Done

"Sure," you say, "that is a nice diagram Jonathan, but who has time for all of this stuff?"

Or perhaps you say something less flattering.

This is a fair criticism: our time and our energy are our most precious commodities as leaders, so a bunch of extra paper-pushing does not help anyone.

Here is a cost-effective way to approach these, get started, and grow as you see which parts are most valuable for you.

  1. KPIs - Most KPIs should be an accounting function, and those that are not either should be, or they should be baked into the software product you develop as a key management function, end of story. Beware of special cases and subdivisions that make reporting on KPIs more work. Everyone should get the same measurement; that is what makes it a KPI. Large organizations with true business divisions can subdivide. If that is you, great. If it isn't you, don't spend your time on it.
  2. OKRs - Experiment and scale as necessary. I mentioned using Weekdone, and I think it is a fine tool with a nice free-to-use option for getting started. You could also create some basic OKRs for teams for a quarter and use a spreadsheet to track them. That would be a fine way to start. Just be sure you're getting the spreadsheet out periodically and discussing it.
  3. Engineering Metrics - Here you need a tool. There are a number of these now, and I do not have experience with all of them, but engaging your development team in understanding their work better is the key here - find a tool that helps you engage them in understanding their work and how to get more done. What are the roadblocks? What works well? What is challenging? To the extent that it is possible, get the teams themselves to engage with and suggest ways to use these numbers. Metrics that the team sees as serving them and their fellow devs - that is the maxim you should go with. Make suggestions, sure. But the more this can be owned by the people building stuff, the better off you'll be. There are a few tools available in this space; I do not know all of them well.
  4. Positive Behavioral Metrics - I think a tool is helpful here, but not 100% necessary. A tool helps you to spread the positive energy to more people because it can send automatic emails, texts, or messages. Those messages give you a boost, show your employees and bosses that you spread that energy, and give all of them a boost too. Just sending an email to 1 person and tracking the info in a spreadsheet doesn't do those other things. I've had to build this tool myself at several companies and I have basic source code to do it. I am currently developing an open source solution, and when I get it done I will share it. There are tools for employee recognition out there; I do not know how well any of them solve the core problems of multi-person recognition, up-chain recognition, general positive energy flow in organizations, and principle-based recognition. Those are the core tenets of what my solution does (and has always done).

In the absence of a great tool, you simply have to find ways to spread as much positive energy as possible and make sure you are up-reporting (sending your kudos to the person and that person's boss or boss's boss or both). Not the world's worst assignment. As one CEO that I worked for said to me, "One of the best parts of my job is going around recognizing people for all the great things they do." So, sending her emails about my team's successes was not hard for me to do. It made her feel good, it made my employees feel good, and it made me feel good. That's some positive energy.

If your method doesn't immediately enable superb reporting, try to keep a record somewhere - you can even use email history.


Cloud Pricing - A DevOps Feature Suggestion

Posted on .

I wrote about this in an earlier post.

Google will begin charging for in-use, public IP Addresses on January 1, 2020. Is this the much ballyhooed supply crunch for IPv4 addresses in its early stages? Probably not.

My guess is that it is simply a reflection of the fact that Google sees this as a chance to recognize some revenue from a product/service that it has been giving away previously. And no one is going to switch cloud providers because of a few bucks they get charged for an in-use IP Address.

So, why write a blog about it?

Because it is December 18th and they still haven't published a SKU for it, which makes it hard for me to update my IP Address pricing tool on pricekite.io. This is mildly frustrating since I have some personal time this week in advance of the holidays and I was hoping to make this change.

For now, my information on pricekite remains accurate, in that I published the prices when Google published them. However, I would like the dynamic piece of this to work correctly. Right now it is simply hard-coded in the UI - not the most elegant solution.

In any case, this speaks to the generally opaque way that pricing and product SKUs work in the cloud world. SKUs are published, but the exact way in which they work, interact, and are used to charge your bill is sometimes hard to understand. Which SKUs apply? How are the discounts accounted for? Is it one SKU or many? How exactly will I be charged for what I am creating?

Each cloud provider has its own approach to this and actually Google has a decent tool here. But it still isn't perfect or always clear how it will work.

It would be great if the cloud providers would create a cloud pricing tool (similar to the way the Lambda permissions tool works) that showed you exactly which SKUs you were using, how discounts are likely to apply, and forecast cost with the ability to tweak scenarios.

Here is how the Lambda resource visualization works:

[Image: AWS Lambda designer resource visualization]

To go a step further, I'd like to see suggestions (based on DB structure or code being written) for how to reduce cost or make smarter choices. I believe this would be achievable, although it might be difficult to do.

This visualization would show all the SKUs that would be used based on the current configuration and code of a particular service, and it would have some inputs to allow you to tweak scenarios and understand the cost of your choices.

This would allow developers to truly embody the DevOps mindset we all are striving for. How does my code choice impact the cost of this solution? As currently constructed, no cloud provider offers this type of functionality.


Random Test Harness

Posted on .

I created a basic random testing harness and put the code on GitHub:

https://github.com/jonathan-fries/random_testing

It is very, very basic as the ReadMe file points out. I will most likely do some additional work on it.

As it stands, there is no obvious problem with either source of randomness (or pseudo-randomness) tested.

Both Math.random() and Random.org work fine. That is, when you ask for a bunch of random numbers in a range, they return a result that looks pretty darn random.
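For the curious, the core of such a test is easy to reproduce. The sketch below is my own illustration, not the code in the repo: it draws sets of 20 numbers from a range of 1200 (roughly the scenario I care about) and counts how often any value repeats within a set.

```javascript
// Draw 'draws' numbers in [1, rangeSize] and count distinct values that repeat.
function countRepeats(rangeSize, draws, randomFn) {
  const counts = new Map();
  for (let i = 0; i < draws; i++) {
    const n = Math.floor(randomFn() * rangeSize) + 1;
    counts.set(n, (counts.get(n) || 0) + 1);
  }
  return [...counts.values()].filter((c) => c > 1).length;
}

// Run many trials against Math.random() and report the average.
const trials = 10000;
let totalRepeats = 0;
for (let t = 0; t < trials; t++) {
  totalRepeats += countRepeats(1200, 20, Math.random);
}
console.log(`Average repeated values per set of 20: ${totalRepeats / trials}`);
```

With a truly uniform source, only roughly one set in seven should contain any repeat at all, so a number much higher than that would be suspicious.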

This does not explain behavior that I have seen from Math.random().

The next things that I plan to do are:

  1. Introduce a static, non-variable delay (already in progress).
  2. Develop a basic UI for displaying the results.
  3. Add Crypto.getRandomValues() as another source of randomness

My reasoning for number 1 is that it is clear (so far) with this test harness that, when using it in a basic way, there is no pattern.

So the experiment would seem to dispel my hypothesis.

It is possible that I was simply seeing patterns where there are no patterns. The human brain is good at that.

But I really did see the same number (selected out of about 1200) come up 3 to 4 times within a set of 20 numbers. And this happened a lot.
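For reference, here is roughly what a uniform generator should produce in that situation. This quick simulation (my own sketch, using Math.random() itself as the stand-in for a uniform source) estimates how often any value shows up three or more times in 20 draws from a range of 1200:

```javascript
// Does any value appear 3+ times in 'draws' uniform picks from [1, rangeSize]?
function hasTripleRepeat(rangeSize, draws) {
  const counts = new Map();
  for (let i = 0; i < draws; i++) {
    const n = Math.floor(Math.random() * rangeSize) + 1;
    const c = (counts.get(n) || 0) + 1;
    if (c >= 3) return true;
    counts.set(n, c);
  }
  return false;
}

const trials = 1000000;
let hits = 0;
for (let t = 0; t < trials; t++) {
  if (hasTripleRepeat(1200, 20)) hits++;
}
// With uniform draws this should land somewhere around 1 in 1,000 to 1,500 sets.
console.log(`Sets with a value appearing 3+ times: ${hits} of ${trials}`);
```

If a truly uniform generator only does that about once in every thousand-plus sets, seeing it "a lot" would be genuinely odd.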

So, perhaps it has to do with the way I was calling it. And if I was creating a pattern through manual testing, perhaps a standard delay could reproduce a similar situation. So, I will introduce a delay and see if that produces different results.

If nothing else, I can simply lay to rest my (perhaps human-induced) pattern recognition and sleep easier knowing that pseudo-random is really pretty random after all.

I don't really expect to see any pattern with Random.org. They were just a control and I had recently written code to integrate with them, so I included them for comparison.


Struggles with Math.random()

Posted on .

I've been using Math.random() in JavaScript (both in the browser and on the server) to generate random numbers for a couple of years and, for reasons I can't explain, I get a lot of duplicates.

Yes, I know, it's random. So you will get duplicates. But I was getting duplicates on the order of every 10th number, when selecting from a range of over 1000 possibilities. And this happened every time I looked at a sequence that long.

I Googled it exhaustively and found nothing meaningful.

It is true that Math.random() is not really random of course. It is pseudo-random - computers such as you and I have on our desks, or such as you can get provisioned (easily) in cloud data centers, cannot generate real random numbers.

But when you look at the articles on these pseudo-random number generators (PRNGs), you see that it should not happen as often as it was happening to me.

Here are some useful articles on how they work:

https://hackernoon.com/how-does-javascripts-math-random-generate-random-numbers-ef0de6a20131

https://v8.dev/blog/math-random

They're a little techy, but so is this topic.

Today browsers offer Crypto.getRandomValues(), which is a good deal more secure, but I simply decided that I wasn't leaving this to pseudo-random chance. I went looking for where I could get real random numbers on the internet and I found two places.
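For completeness, here is roughly what the Crypto route looks like for the "pick a number in a range" case. This is just a sketch (in Node you would reach for the crypto module rather than the browser global); the rejection step avoids the slight bias you get from a plain modulo:

```javascript
// Pick an integer in [1, max] using the browser's cryptographic random source.
function cryptoRandomInt(max) {
  // Largest multiple of max that fits in a Uint32; anything at or above it is
  // re-drawn so every result in [1, max] stays equally likely.
  const limit = Math.floor(2 ** 32 / max) * max;
  const buf = new Uint32Array(1);
  let value;
  do {
    crypto.getRandomValues(buf);
    value = buf[0];
  } while (value >= limit);
  return (value % max) + 1;
}

console.log(cryptoRandomInt(1200)); // e.g. 847
```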

The first I already knew about, random.org. It has been around for a good long while, and it has an API.

The second I found was a consortium, seemingly led by Cloudflare. I know Cloudflare as I use them on another site for DNS security, but I was unaware of their work around cryptography.

Here is a link to some information about Cloudflare's project 'League of Entropy' and also another cool project they have for generating their own (internal) random numbers from lava lamps:

League of Entropy
Lava Lamps

Using Cloudflare's solution was more complicated than what I wanted and I was not sure exactly how to adapt it for my needs, so I went with Random.org instead.

Random.org's API was very easy to use and their testing tool was very helpful. Here is a link to their request builder:

https://api.random.org/json-rpc/2/request-builder

Random.org uses atmospheric radio noise to generate real random numbers. That is, they have devices (in multiple countries) tuned between commercial radio frequencies, and the background noise from these radios is used to generate a random signal that produces random numbers. Neat! Here is a link to an FAQ question about it that is very interesting:

https://www.random.org/faq/#Q1.4

The only thing that I did not like about it is that I needed to round-trip to random.org to get a random number, and I had to figure out how to secure the license key. I did not want a lot of latency and I did not want to publish my license key to every browser in the world. Here is what my solution looked like:

  1. In browser, generate a pseudo-random number on page load. This is a fallback if other things break.
  2. In browser, make a background request to the website back-end for a real random number.
  3. On the server, if I have a real random number available, send it to the browser.
  4. On the server, if I don't have a real random number available, make a request for a buffer of random numbers from random.org. Generate a pseudo-random number and send it to the browser to avoid any additional latency.
  5. On the server, once I receive a buffer of random numbers store it.
  6. In the browser, make use of the random number.
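For what it's worth, a condensed sketch of the server side of steps 3 through 5 might look something like this. It is illustrative rather than my actual production code: the Express route, the buffer size, and the exact shape of the random.org request (their generateIntegers JSON-RPC method) are assumptions you should check against the request builder linked above.

```javascript
// Sketch of steps 3-5: serve a true random number from a buffer when one is
// available, otherwise answer immediately with a pseudo-random fallback and
// refill the buffer from random.org in the background.
const express = require('express');  // assumed web framework
const fetch = require('node-fetch'); // or the built-in fetch on newer Node

const app = express();
let buffer = [];             // true random numbers waiting to be handed out
let refillInProgress = false;

app.get('/random', (req, res) => {
  if (buffer.length > 0) {
    // Step 3: a real random number is available.
    res.json({ value: buffer.pop(), source: 'random.org' });
  } else {
    // Step 4: fall back to pseudo-random so the browser never waits.
    res.json({ value: Math.floor(Math.random() * 1200) + 1, source: 'fallback' });
    refillBuffer();
  }
});

async function refillBuffer() {
  if (refillInProgress) return;
  refillInProgress = true;
  try {
    // Step 5: request a batch of true random numbers. The method name and
    // params follow random.org's JSON-RPC API; verify against their docs.
    const response = await fetch('https://api.random.org/json-rpc/2/invoke', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        jsonrpc: '2.0',
        method: 'generateIntegers',
        params: { apiKey: process.env.RANDOM_ORG_KEY, n: 100, min: 1, max: 1200 },
        id: 1,
      }),
    });
    const json = await response.json();
    buffer = json.result.random.data;
  } catch (err) {
    console.error('random.org refill failed, staying on the fallback', err);
  } finally {
    refillInProgress = false;
  }
}

app.listen(3000);
```

The license key stays on the server (it never goes to the browser), and the fallback path means a failed or slow call to random.org never blocks the page.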

I was a lot more worried about introducing latency (and failures) than I was about whether or not I had a truly random number every single time, since this isn't exactly a high-security system.

Nonetheless, I wanted a random number so I implemented the code above to give myself true randomness most of the time, with a fallback to pseudo-randomness to avoid excessive latency or failures. It was about 3 hours to get it all working and tested for edge cases such as "What happens if the web service fails?" or "What happens if my buffer counter for random numbers ends up in the negative for some reason?"

I still want to get to the bottom of why I was seeing so many duplicate numbers from Math.random(). I have to believe it was something to do with my code and not with Math.random() itself.

I am going to branch that effort into a different project to see if I can reproduce the behavior and get to the bottom of it.


Now Reading: The Happiness Advantage

Posted on .

I'm currently re-reading The Happiness Advantage by Shawn Achor.

It's a great book for thinking about happiness first, not as an outcome that we get later on after we're successful or rich or whatever.

In fact, according to the book, you stand a better chance of reaching those goals if you focus on being happy first.

Happiness is a precursor to success, not the other way around.

The author has identified 7 principles that he believes will help you to be more happy. Here is a quick summary:

  1. The Happiness Effect - As mentioned above, happiness drives success, and not the other way around. But why is that true? It's true because happy people have greater ability to think creatively and process options. This is an evolutionary adaptation - as stress narrows our options to fight, flight, or freeze, happiness opens us to the great variety of options that are available. In addition to discussing the theoretical underpinnings, this chapter offers a number of ways to give yourself the happiness advantage. These are tips and tricks to help lift your mood and make you more open to possibilities and success, as well as providing an antidote to stress. There are also tips for leaders on how to infuse your workplace with happiness.
  2. The Fulcrum and the Lever - By changing how we view the world, we change how we react to it. By having a more positive outlook, we will react with greater positivity, freeing us from negative reactions. By doing this we can have a much greater impact on the world around us. At first blush, this feels like a restatement or explanation of number 1, but it is ultimately more than that. While tips and tricks to add happiness help, and being happier can help us be more successful, point 2 is deeper. This is about a positive mindset as a fundamental alteration of ourselves, creating even greater possibilities. The author says it best:

Simply put, by changing the fulcrum of our mindset and lengthening the lever of possibility, we change what is possible. It's not the weight of the world that determines what we can accomplish. It is our fulcrum and our lever.

  3. The Tetris Effect - The more time that we spend engaged in an activity, the more our brain becomes wired to perform that activity. Tax accountants spend their days searching for errors in tax forms. As a result they become wired to search for errors in everything they do. Conversely, the more time we spend scanning for the positive, the more access we gain to three very important tools: happiness, gratitude, and optimism. Happiness we've already discussed. Gratitude is a reproducer of happiness in the now: the more we see things to be grateful for, the more grateful we become, the more we see things to be grateful for, etc. Optimism does the same for the future: the more we focus on happiness, the more we expect that trend to continue in the future.
  4. Falling Up - When we fail or suffer setbacks, falling down or staying in place are not the only options. For many people, failures and setbacks (and even trauma) can produce changes that allow you to not simply stay where you are, but take even bigger steps forward. This is the idea of falling up. Become better because of the setbacks in your life. Use your failures and losses as learning experiences. See where those silver linings can take you. The author provides several techniques for how to think about these situations and gather positive momentum from them.
  5. The Zorro Circle - Limiting yourself to small, narrow goals helps you stay in control and expands your ability to stay focused rather than giving in to negative thoughts and helplessness. You may be familiar with this (if you have ever read The Seven Habits of Highly Effective People by Stephen Covey) as Circle of Influence and Circle of Concern. It was a great principle then, and it still works now. This book provides some interesting additional scientific information about how this works in our brains and bodies, and it provides insightful tactics to build up our circles of influence (or control) to make us more resilient.
  6. The 20 Second Rule - By reducing small barriers to change, we make it easier to develop new, healthy habits. It can be difficult to make changes to improve our health or well-being. Willpower alone is quite fallible and gets worn down the more we are asked to use it. By the end of a long day it can be difficult to work up the necessary grit to go to the gym. We can make this easier on ourselves by simply removing the small obstacles that unnecessarily sap our self-control energy - remove bookmarks to distracting sites, keep your boots at the ready so you still go outside in the winter, put that book you're meaning to read on the coffee table where it is easy to get to, or hide the remote control so that it is hard to turn on the TV.
  7. Social Investment - Building and maintaining social connections has important ramifications for our ability to handle stress and face challenging situations. When we have strong social connections we are more resilient and less likely to think of situations as stressful in the first place. Even brief encounters can be beneficial - a short encounter can still be high quality, resetting our respiratory system and reducing levels of cortisol (a stress hormone) in our bodies. Rewards for positive social interactions are very much wired into our brains. So here is one more reason (if you needed one) to maintain your social connections and build up your work and personal networks.

The last part of the book focuses on the ways that using the 7 principles can help us to spread the benefits at work, at home, and everywhere else in the world.

In addition to the overview information I've shared, the book has lots of detail on how things work as well as how to put it into action.

I'm personally a lot more interested in putting things in action, and don't usually have to be sold on the fact that it works, but it is there if you need it.

The individual steps and processes are easily worth the price of the book. After all, if even one of these makes a difference for you - whether in your personal happiness or your career satisfaction - what was that worth?