Jonathan's Blog

Mindful Leadership and Technology


Featured

Random Test Harness

Posted on .

I created a basic random testing harness and put the code on GitHub:

https://github.com/jonathan-fries/random_testing

It is very, very basic as the ReadMe file points out. I will most likely do some additional work on it.

As it stands, there is no obvious problem with either source of randomness (or pseudo-randomness) that I tested.

Both Math.random() and Random.org work fine. That is, when you ask for a bunch of random numbers in a range, they return a result that looks pretty darn random.
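To give a sense of the kind of check the harness performs, here is a minimal sketch (not the actual code from the repo): it draws integers in a range with Math.random() and counts how many windows of 20 draws contain a repeat.

```javascript
// Minimal sketch (not the code from the repo): draw integers in a range with
// Math.random() and count how many windows of 20 draws contain a repeat.
function drawInt(range) {
  return Math.floor(Math.random() * range); // integer in [0, range)
}

function countWindowsWithRepeats(totalDraws, range, windowSize) {
  const draws = Array.from({ length: totalDraws }, () => drawInt(range));
  let repeats = 0;
  for (let i = 0; i + windowSize <= draws.length; i += windowSize) {
    const window = draws.slice(i, i + windowSize);
    if (new Set(window).size < window.length) repeats++;
  }
  return repeats;
}

// Windows of 20 picks from ~1200 values, roughly the shape of my observation.
console.log(countWindowsWithRepeats(10000, 1200, 20), 'of', 10000 / 20, 'windows had a repeat');
```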

This does not explain behavior that I have seen from Math.random().

The next three things that I plan to do are:

  1. Introduce a static, non-variable delay (already in progress; a sketch follows the list below).
  2. Develop a basic UI for displaying the results.
  3. Add Crypto.getRandomValues() as another source of randomness.
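For item 1, the idea is simply to put a fixed pause between draws to mimic the cadence of my manual testing. A rough sketch of what that could look like (the delay, count, and range here are placeholders, not the harness code itself):

```javascript
// Sketch of item 1: sample with a static delay between draws, to mimic the
// cadence of my manual testing. The delay, count, and range are placeholders.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sampleWithDelay(count, range, delayMs) {
  const results = [];
  for (let i = 0; i < count; i++) {
    results.push(Math.floor(Math.random() * range));
    await sleep(delayMs); // fixed, non-variable pause between draws
  }
  return results;
}

sampleWithDelay(20, 1200, 500).then((numbers) => console.log(numbers));
```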

My reasoning for number 1 is that, so far, it is clear from this test harness that when it is used in a basic way, there is no pattern.

So the experiment would seem to dispel my hypothesis.

It is possible that I was simply seeing patterns where there are no patterns. The human brain is good at that.

But I really did see the same number (selected out of about 1200) come up 3 to 4 times within a set of 20 numbers. And this happened a lot.

So, perhaps it has to do with the way I was calling it. And if I was creating a pattern through manual testing, perhaps a standard delay could reproduce a similar situation. So, I will introduce a delay and see if that produces different results.

If nothing else, I can simply lay to rest my (perhaps human-induced) pattern recognition and sleep easier knowing that pseudo-random is really pretty random after all.

I don't really expect to see any pattern with Random.org. They were just a control and I had recently written code to integrate with them, so I included them for comparison.

Featured

Struggles with Math.random()

Posted on .

I've been using Math.random() in JavaScript (both in the browser and on the server) to generate random numbers for a couple of years and, for reasons I can't explain, I get a lot of duplicates.

Yes, I know, it's random. So you will get duplicates. But I was getting duplicates on the order of every 10th number, when selecting from a range of over 1000 possibilities. And this happened every time I looked at a sequence that long.
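For a sense of how often duplicates should appear purely by chance, the standard birthday-problem calculation gives the probability that a batch of picks contains at least one repeat:

```javascript
// Birthday-problem check: probability that `draws` uniform picks from `range`
// values contain at least one duplicate.
function probabilityOfAnyDuplicate(draws, range) {
  let allDistinct = 1;
  for (let k = 0; k < draws; k++) {
    allDistinct *= (range - k) / range;
  }
  return 1 - allDistinct;
}

console.log(probabilityOfAnyDuplicate(10, 1000).toFixed(3)); // ~0.044
console.log(probabilityOfAnyDuplicate(20, 1000).toFixed(3)); // ~0.174
```

That is the gap I could not explain: a duplicate roughly every 10 numbers, when the math says any given stretch of 10 picks from 1,000 values should contain one only a few percent of the time.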

I Googled it exhaustively and found nothing meaningful.

It is true, of course, that Math.random() is not really random. It is pseudo-random: the computers that you and I have on our desks, or that you can (easily) get provisioned in cloud data centers, cannot generate real random numbers.

But when you look at the articles on these pseudo-random number generators (PRNGs), you see that duplicates should not happen as often as they were happening for me.

Here are some useful articles on how they work:

https://hackernoon.com/how-does-javascripts-math-random-generate-random-numbers-ef0de6a20131

https://v8.dev/blog/math-random

They're a little techy, but so is this topic.

Today browsers offer Crypto.getRandomValues(), which is a good deal more secure. But I decided that I simply wasn't leaving this to pseudo-random chance, so I went looking for places on the internet where I could get real random numbers, and I found two.
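For reference, pulling a ranged integer from the browser's CSPRNG could look something like this (a minimal sketch, not code from this site):

```javascript
// Sketch: an unbiased integer in [0, range) from the browser's CSPRNG,
// using rejection sampling to avoid modulo bias.
function cryptoRandomInt(range) {
  const maxUint32 = 0x100000000; // 2^32
  const limit = maxUint32 - (maxUint32 % range); // reject draws at or above this
  const buf = new Uint32Array(1);
  let value;
  do {
    crypto.getRandomValues(buf);
    value = buf[0];
  } while (value >= limit);
  return value % range;
}

console.log(cryptoRandomInt(1200));
```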

The first I already knew about, random.org. It has been around for a good long while, and it has an API.

The second I found was a consortium, seemingly led by Cloudflare. I know Cloudflare as I use them on another site for DNS security, but I was unaware of their work around cryptography.

Here is a link to some information about Cloudflare's project 'League of Entropy', and another cool project they have for generating their own (internal) random numbers from lava lamps:

League of Entropy
Lava Lamps

Using Cloudflare's solution was more complicated than I wanted, and I was not sure exactly how to adapt it for my needs, so I went with Random.org instead.

Random.org's API was very easy to use and their testing tool was very helpful. Here is a link to their request builder:

https://api.random.org/json-rpc/2/request-builder

Random.org uses atmospheric radio noise to generate real random numbers. That is, they have devices (in multiple countries) tuned between commercial radio frequencies, and the background noise from these radios is used to generate a random signal that produces random numbers. Neat! Here is a link to a very interesting FAQ entry about it:

https://www.random.org/faq/#Q1.4

The only thing that I did not like about it is that I needed to round-trip to random.org to get a random number, and I had to figure out how to secure the license key. I did not want a lot of latency, and I did not want to publish my license key to every browser in the world. Here is what my solution looked like:

  1. In browser, generate a pseudo-random number on page load. This is a fallback if other things break.
  2. In browser, make a background request to the website back-end for a real random number.
  3. On the server, if I have a real random number available, send it to the browser.
  4. On the server, if I don't have a real random number available, make a request for a buffer of random numbers from random.org. Generate a pseudo-random number and send it to the browser to avoid any additional latency.
  5. On the server, once I receive a buffer of random numbers store it.
  6. In the browser, make use of the random number.
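Here is a rough sketch of what the server half of that flow could look like, assuming Node with Express and the global fetch from Node 18+. The endpoint name, buffer size, and number range are placeholders, and the request shape follows Random.org's request builder, so treat the details as an approximation rather than my actual code:

```javascript
// Sketch of the server side (Node with Express and the global fetch from
// Node 18+ assumed). Endpoint name, buffer size, and number range are
// placeholders; the request shape follows Random.org's request builder.
const express = require('express');
const app = express();

let buffer = [];            // true random numbers waiting to be handed out
let refillInFlight = false; // avoid stacking refill requests

async function refillBuffer() {
  if (refillInFlight) return;
  refillInFlight = true;
  try {
    const res = await fetch('https://api.random.org/json-rpc/2/invoke', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        jsonrpc: '2.0',
        method: 'generateIntegers',
        params: { apiKey: process.env.RANDOM_ORG_KEY, n: 100, min: 0, max: 1199 },
        id: Date.now(),
      }),
    });
    const json = await res.json();
    buffer = buffer.concat(json.result.random.data); // step 5: store the buffer
  } catch (err) {
    console.error('random.org refill failed; pseudo-random fallback stays in place', err);
  } finally {
    refillInFlight = false;
  }
}

app.get('/random', (req, res) => {
  if (buffer.length > 0) {
    res.json({ value: buffer.pop(), source: 'random.org' }); // step 3
  } else {
    // Step 4: don't make the browser wait on the round trip to random.org.
    res.json({ value: Math.floor(Math.random() * 1200), source: 'pseudo' });
  }
  if (buffer.length < 10) refillBuffer(); // top the buffer back up in the background
});

app.listen(3000);
```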

I was a lot more worried about introducing latency (and failures) than I was about whether or not I had a truly random number every single time, since this isn't exactly a high-security system.

Nonetheless, I wanted a random number, so I implemented the code above to give myself true randomness most of the time, with a fallback to pseudo-randomness to avoid excessive latency or failures. It took about 3 hours to get it all working and tested for edge cases such as "What happens if the web service fails?" or "What happens if my buffer counter for random numbers ends up in the negative for some reason?"

I still want to get to the bottom of why I was seeing so many duplicate numbers from Math.random(). I have to believe it was something to do with my code and not with Math.random() itself.

I am going to branch that effort into a different project to see if I can reproduce the behavior and get to the bottom of it.

Featured

Now Reading: The Happiness Advantage

Posted on .

I'm currently re-reading The Happiness Advantage by Shawn Achor.

It's a great book for thinking about happiness first, not as an outcome that we get later on after we're successful or rich or whatever.

In fact, according to the book, you stand a better chance of reaching those goals if you focus on being happy first.

Happiness is a precursor to success, not the other way around.

The author has identified 7 principles that he believes will help you be happier. Here is a quick summary:

  1. The Happiness Advantage - As mentioned above, happiness drives success, and not the other way around. But why is that true? It's true because happy people have greater ability to think creatively and process options. This is an evolutionary adaptation: where stress narrows us down to fight, flight, or freeze, happiness opens us to the great variety of options that are available. In addition to discussing the theoretical underpinnings, this chapter offers a number of ways to give yourself the happiness advantage. These are tips and tricks to help lift your mood and make you more open to possibilities and success, as well as providing an antidote to stress. There are also tips for leaders on how to infuse your workplace with happiness.
  2. The Fulcrum and the Lever - By changing how we view the world, we change how we react to it. By having a more positive outlook, we will react with greater positivity, freeing us from negative reactions. By doing this we can have a much greater impact on the world around us. At first blush, this feels like a restatement or explanation of number 1, but it is ultimately more than that. While tips and tricks to add happiness help, and being happier can help us be more successful, point 2 is deeper. This is about a positive mindset as a fundamental alteration of ourselves, creating even greater possibilities. The author says it best:

Simply put, by changing the fulcrum of our mindset and lengthening the lever of possibility, we change what is possible. It's not the weight of the world that determines what we can accomplish. It is our fulcrum and our lever.

  3. The Tetris Effect - The more time that we spend engaged in an activity, the more our brain becomes wired to perform that activity. Tax accountants spend their days searching for errors in tax forms. As a result they become wired to search for errors in everything they do. Conversely, the more time we spend scanning for the positive, the more access we gain to three very important tools: happiness, gratitude, and optimism. Happiness we've already discussed. Gratitude is a reproducer of happiness in the now: the more we see things to be grateful for, the more grateful we become, the more we see things to be grateful for, etc. Optimism does the same for the future: the more we focus on happiness, the more we expect that trend to continue in the future.
  4. Falling Up - When we fail or suffer setbacks, falling down or staying in place are not the only options. For many people, failures and setbacks (and even trauma) can produce changes that allow you to not simply stay where you are, but take even bigger steps forward. This is the idea of falling up. Become better because of the setbacks in your life. Use your failures and losses as learning experiences. See where those silver linings can take you. The author provides several techniques for how to think about these situations and gather positive momentum from them.
  5. The Zorro Circle - Limiting yourself to small, narrow goals helps you stay in control and expand your ability to stay focused and not give in to negative thoughts and helplessness. You may be familiar with this (if you have ever read The Seven Habits of Highly Effective People by Stephen Covey) as the Circle of Influence and Circle of Concern. It was a great principle then, and it still works now. This book provides some interesting additional scientific information about how this works in our brains and bodies, and it provides insightful tactics to build up our circles of influence (or control) to make us more resilient.
  6. The 20-Second Rule - By reducing small barriers to change, we make it easier to develop new, healthy habits. It can be difficult to make changes to improve our health or well-being. Willpower alone is quite fallible and gets worn down the more we are asked to use it. By the end of a long day it can be difficult to work up the necessary grit to go to the gym. We can make this easier on ourselves by simply removing the small obstacles that unnecessarily sap our self-control energy - remove bookmarks to distracting sites, keep your boots at the ready so you still go outside in the winter, put that book you're meaning to read on the coffee table where it is easy to get to, or hide the remote control so that it is hard to turn on the TV.
  7. Social Investment - Building and maintaining social connections has important ramifications for our ability to handle stress and face challenging situations. When we have strong social connections we are more resilient and less likely to think of situations as stressful in the first place. Even brief encounters can be beneficial - a short encounter can still be high quality, resetting our respiratory system and reducing levels of cortisol (a stress hormone) in our bodies. Rewards for positive social interactions are very much wired into our brains. So here is one more reason (if you needed one) to maintain your social connections and build up your work and personal networks.

The last part of the book focuses on the ways that using the 7 principles can help us to spread the benefits at work, at home, and everywhere else in the world.

In addition to the overview information I've shared, the book has lots of detail on how things work as well as how to put it into action.

I'm personally a lot more interested in putting things into action, and don't usually have to be sold on the fact that it works, but it is there if you need it.

The individual steps and processes are easily worth the price of the book. After all, if even one of these makes a difference for you - whether in your personal happiness or your career satisfaction - what was that worth?

Featured

Cloud Computing, Cloud Price Comparison

How Much are you Charging me Right Now?

Posted on .

Whenever I go and sign up for any particular cloud provider, or add a new service, I always have a moment of panic that sounds like this in my head:

"How much will this cost? How much is this already costing me, RIGHT NOW at this VERY MINUTE?"

Because the answer, if you aren't careful, could be a lot of money.

So, I am improving my skills with all of the cost dashboards, and I have even provisioned an AWS Organization in such a way as to track the cost of my AWS DeepRacer contest participants (not as simple as I thought it would be).

But it is difficult to understand what is turned on at any given second and what is being charged for.

It would be nice to have a Panic Report that showed you everything that is operational and what the per-hour or per-second charge for each thing is.

Perhaps this runs counter to the notion of "don't make it hard for the customer to give you money" or "encourage them to sign up for everything and don't show the cost until the bill arrives", but I have to believe that these companies are not that shortsighted.


Side note: I am signing up for IBM Cloud and Oracle Cloud, in the interest of seeing if I can pull the same information into pricekite from those folks that I did for Google Cloud, Microsoft Azure, and AWS.

So far, I think the answer is 'No', but we'll see.

With Oracle, I am unable to even sign in at this juncture. Their sign-in process is very much 'not like the others' and seems hyper-focused on the enterprise.

I guess that makes sense for them, but it is frustrating. I guess I won't accidentally be giving them any money.

Featured

Cloud Price Comparison, Cloud, Cloud Computing, Multi-Cloud Pricing

Pricekite.io Beta

Posted on .

The new version of Pricekite is up, which I am labeling 'beta'. It allows for interactive comparison of serverless compute pricing across Google Cloud, Amazon Web Services, and Microsoft Azure. It is located here. And the code is also on GitHub.

I'm pretty happy with the current results. You can repeat the results of the earlier blog post exactly, meaning that my math was right. Though the fact that the Azure SKUs count things in 10s instead of 1s threw me for a loop momentarily.

Here are the core research tools that were used to do this, along with the cloud providers' APIs:

The Google SKU explorer is very helpful; I wish all the providers would add such a feature.

You can also complete the extended analysis I mentioned in my blog notes. In order to run those scenarios, you increase the number of functions, transactions, or any other parameter and see how the effect of discounts dissipates with larger volumes.

My original scenario was:

2 Functions
12,960,000 Executions/Month
512 MB Function Memory
200 ms/execution

In this scenario (which is substantial) AWS is the clear winner because of their discounts. However, when you increase the functions to 32, Azure becomes the less expensive option because of their slightly lower base pricing.
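The arithmetic behind that comparison is straightforward. Here is a sketch of it, reading the 12,960,000 executions as per function (so total volume scales as you add functions) and using placeholder free-tier and unit rates rather than any provider's actual published prices:

```javascript
// Sketch of the arithmetic, not pricekite itself. The free-tier and unit
// rates are placeholders for illustration, not any provider's actual prices.
function monthlyServerlessCost(scenario, rates) {
  const requests = scenario.functions * scenario.executionsPerFunction;
  const gbSeconds = requests * scenario.secondsPerExecution * scenario.memoryGB;
  const billableGBSeconds = Math.max(0, gbSeconds - rates.freeGBSeconds);
  const billableRequests = Math.max(0, requests - rates.freeRequests);
  return billableGBSeconds * rates.perGBSecond + billableRequests * rates.perRequest;
}

const scenario = {
  functions: 2,
  executionsPerFunction: 12960000,
  memoryGB: 0.5,             // 512 MB
  secondsPerExecution: 0.2,  // 200 ms
};

const placeholderRates = {
  freeGBSeconds: 400000,
  freeRequests: 1000000,
  perGBSecond: 0.0000166,
  perRequest: 0.0000002,
};

console.log(monthlyServerlessCost(scenario, placeholderRates).toFixed(2));
// Bump `functions` to 32 and the free tier becomes a rounding error in the total.
```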

Featured

Cloud Price Comparison, Cloud, Cloud Computing

Cloud Pricing Thoughts

Posted on .

For my last post, comparing serverless offerings from a number of angles, I was struck again by the complexity of cloud pricing, which I alluded to at the end of that article.

Of course there is the great convenience of the cloud and all that it does for us, but there is also the substantial complexity of all that is offered, and the way it is priced.

This is a good article to read if you are thinking about this stuff:

https://www.techrepublic.com/article/aws-billing-is-broken-and-kubernetes-wont-last-says-irreverent-economist-corey-quinn/

My biggest beef is that I have to do actual research to understand the SKUs and how they work in order to do a price comparison. The APIs themselves do not (in any way, shape, or form) describe the service offering.

So: I Google, I read, and I figure it out. At least Google now offers this:

https://cloud.google.com/skus/?currency=USD

It is a very helpful research tool, albeit only for their own cloud offering.

The other providers should do something like it. And they should give you programmatic access.

These systems all lack some definitive way to identify the pertinent SKUs. As a result, I'm hardcoding things (for now) and thinking about how this could be done better in the future.

Even if the values are stored in a database or configuration, I will need to add new ones, stay up to date, etc.

For real dynamic pricing, it would be better if there were an API that could be called to retrieve the relevant SKUs for any given service. Something like this: "Dear AWS, please give me all SKUs relevant to pricing of Lambda functions in East US 1." This SHOULD be a simple API to use, and they have all the information to build it, but they don't think about things that way.
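Something like the shape below is what I have in mind (entirely hypothetical: the field names, SKU ids, and prices are all made up to show the shape, since no provider offers this today):

```javascript
// Hypothetical only: no provider offers this today. Field names, SKU ids, and
// prices below are made up to show the shape of the API I would want.
const exampleRequest = {
  service: 'lambda',
  region: 'us-east-1',
  priceDimensions: ['requests', 'gb-seconds'],
};

const exampleResponse = {
  skus: [
    { sku: 'EXAMPLE-REQUESTS-USE1', dimension: 'requests', unit: '1M requests', pricePerUnit: 0.20, currency: 'USD' },
    { sku: 'EXAMPLE-GBSEC-USE1', dimension: 'gb-seconds', unit: 'GB-second', pricePerUnit: 0.0000166, currency: 'USD' },
  ],
  freeTier: { requests: 1000000, gbSeconds: 400000 },
};
```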

The Google tool sort of does this, but you get back much more information, and you still have to sort through SKUs to figure it out.

Anyway, billing APIs are not sexy, nor are they money makers for these guys, but meaningful automation that matters to the CFO will depend on services like this.

If I have to maintain a list of SKUs, then my pricing engine will always be a bit brittle. Fine for research, not great for prod.