Jonathan's Blog

Mindful Leadership and Technology

This Code is Fragile on Purpose


This post is a follow-up to my earlier posts on so-called fragile or spaghetti code, where I made a point about reading code. Reading code is not the most fun part of anyone's day, but it generally makes code a lot less 'fragile' if you do it.

This idea is the equivalent of taking the time to read the instructions - you're a lot less likely to break something if you know how you're supposed to operate it.

This post is still about code fragility, but it is about code that actually is fragile, only it is fragile on purpose.


There are a number of reasons why this can happen, and I will illustrate them with some recent work of my own, as well as some client projects. But first I want to focus on what people mean by 'fragile', so it is clear what this means.

You may hear the phrase 'this code is fragile' and think it means the code doesn't work, or that there are purposes to which it is not suited. As in, 'This vase is fragile, so I probably shouldn't fill it with chunks of broken concrete.'

But this is really not what 'fragile' code means. Usually it means:

  1. This code is hard to work on and it is likely to break if I do something to change it.
  2. This code may break in the future if the conditions in which it runs change.
  3. This code may break if any of our integration partners alter their API.
  4. This code does not fail gracefully.
  5. This code is not commented well, and that makes it hard for me to read and understand what it is doing.

Number 1 is just the scenario from the other article, where you aren't working hard enough to understand the code. Said differently, you are really saying, "My understanding of this code is fragile."

I put 1 and 5 at opposite ends of this list for a reason: 5 should not be an obstacle to reading code. It is a better excuse than number 1 for your lack of progress, but it is still an excuse. Poorly commented code simply means you have to spend more time reading, testing, and debugging the code to understand it. No code is ever commented well enough, in my opinion, so you can't let that stop you from working on it. Code can be readable or unreadable, commented or not, but that should not stop you from understanding it if you know what you are doing and that is your job. You can get a different job, but chances are you will be in the same situation there.

Number 5 is only worth noting separately because you should plan for and avoid that situation. Try not to commit this sin yourself if you can.

For both 2 and 3, a significant portion of what will happen in the future is unknowable, so how much time you spend dealing with them should be carefully calibrated. You can know that a partner is going to publish an API update in six months if they tell you, but if you need to go live now, you may simply have to deal with that change when they publish it or when they make the new API available for testing.

As for failures and crashes (number 4), you should use defensive coding practices to avoid exposing your users to failures, even ones that originate outside of your systems. A few examples:

  • A good NoScript section of your website.
  • Graceful handling of offline operation.
  • Handling nulls, empty strings, and other straightforward data situations.
  • Proper use of exception handling to deal with unexpected situations.

Etc, etc.

But be careful of swallowing problems within the system - you may protect your users but also hide the problems from yourself, only to have them come back and bite you later on.
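To make this concrete, here is a minimal Python sketch of the 'handle it, but don't hide it' idea. The function and field names are hypothetical, not from any project mentioned here; the point is that the user gets a sensible fallback while the real error still lands in your logs.

    import logging

    logger = logging.getLogger(__name__)

    def display_price(raw_record):
        """Return a display string for a price record, falling back gracefully."""
        # Handle nulls, empty dicts, and other straightforward data situations.
        if not raw_record:
            return "Price unavailable"
        try:
            amount = float(raw_record.get("amount", ""))
            currency = raw_record.get("currency") or "USD"
            return f"{amount:.2f} {currency}"
        except (TypeError, ValueError) as exc:
            # Protect the user from the failure, but log it so the
            # problem is not hidden from you.
            logger.warning("Malformed price record %r: %s", raw_record, exc)
            return "Price unavailable"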

It's always surprising to me how many sites present an utterly blank page to the user if JavaScript is not enabled or working.


OK, so here we are. We need to do some development work. We have coding practices to deal with numbers 4 and 5. We're working on a new system, so hopefully number 1 is a non-issue. What, if anything, should we do about 2 and 3?

My answer is: it depends.

Truthfully, you can't predict the future or what your integration partner will do to update their API until they publish a specification.

We were recently updating a Dialogflow implementation for v2, but we couldn't have done anything until they published their v2 specs. It would have been guesswork.

As I've been working on my side project Pricekite.io, I am dealing with the billing APIs for AWS, Azure, and Google Cloud, each of which is in varying stages of development. I am faced with 2 challenges:

  1. What are they going to do in the future to update these APIs?
  2. What additional features am I going to want to add in the future?

My decision for both is to do nothing. If and when I decide to add a feature that requires me to improve the code, I'll improve it. I did have to add some data storage in order to have data on hand for one provider (cough, Azure, cough), because their billing API is not the most efficient thing in the world.

I had not intended to add data storage until phase 2, but my hand was forced by the 6 seconds it took to pull and process the data. So, I call the 6-second method every 30 minutes and store the values, which can then be pulled at any time. This introduces latency to the data, but that does not matter at all at this juncture. If it turns out to matter, I can simply crank up the polling frequency.
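For the curious, the shape of that workaround is roughly the following - a minimal sketch, not the actual Pricekite code, with a hypothetical fetch_azure_billing() standing in for the slow pull-and-process call:

    import logging
    import threading
    import time

    POLL_INTERVAL_SECONDS = 30 * 60  # poll every 30 minutes

    _cache = {"values": None, "fetched_at": None}

    def fetch_azure_billing():
        """Hypothetical stand-in for the ~6-second pull-and-process call."""
        ...

    def poll_forever():
        while True:
            try:
                _cache["values"] = fetch_azure_billing()
                _cache["fetched_at"] = time.time()
            except Exception:
                # Log it rather than swallow it; keep serving the last
                # good values in the meantime.
                logging.exception("Billing poll failed")
            time.sleep(POLL_INTERVAL_SECONDS)

    def get_billing_values():
        # Served from memory, so callers never wait 6 seconds; the data
        # may be up to 30 minutes stale, which is fine at this juncture.
        return _cache["values"]

    # Run the poller in the background.
    threading.Thread(target=poll_forever, daemon=True).start()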

So, the code is fragile. But it works, one person was able to write it quickly, and it will be very easy to read because, for the most part, you have line-of-sight readability in the code.



Last Thoughts on IP v4 Address Pricing


As it relates to Pricekite.io, I've already moved on from this topic, and you won't see any more posts about it here. I'm currently deep into navigating the byzantine spaces of the cloud provider product catalogs to pull out serverless computing prices.

But, since I'm pretty sure there will never be a price spike, and since none of the dire predictions ever came true, it is worth considering why.

I have heard at least 3 reasons why we were going to run out of IP addresses. Here they are, along with the reasons they haven't happened and won't:

  1. People - with so many people in the world, and so many of them being online, we will run out of IP addresses because individuals (needing to carve out their digital homesteads) will use them up. Why did it not happen? The rise of online platforms allows people to join the online community without a traditional website and everything that comes with it. Facebook, Twitter, Pinterest, Medium, GitHub, Wix, etc. - you don't need a domain name or an IP address to be online with these services.
  2. Businesses - businesses, even more than people, need to be online so that people can find them and they can make money. Why did it not happen? Platforms are a part of this: many businesses have Facebook pages or Google Sites and that is it, no IP address needed. Also, today you can create a web presence outside of a platform without needing an IP address. Pricekite, for instance, has a unique domain and 2 subdomains, but because of cloud-based development and neat DNS tricks it does not need an IP address of its own.
  3. IoT devices - the idea here is that with billions and billions of connected devices we will run out of IP v4 addresses. Why did it not happen? IoT devices mostly don't use public IP addresses, which is as it should be. There's no reason for your refrigerator to be on the public internet (and therefore need a public IP address). It's dangerous enough having it on your home wifi with an internal address. Internal address space, combined with NAT, is effectively unlimited, so IoT devices aren't putting any pressure on the public IP address space.

OK, that's it for my thoughts on IP Addresses. Look for more information on compute prices in the near future.


Live IP Address Prices - pricekite.io


As part of my recent and ongoing interest in the price of IP Addresses, I'm putting together a site to get live reads on the prices from the 3 big cloud providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

The site is now live with live data from GCP and is called pricekite.io. The prices for AWS and Azure are current, but do not refresh live from web services. The code is on GitHub.

A few observations and thoughts from the first part of this development:


Azure Permissions

I am currently stuck on the Azure implementation because of permissions. I know the least about Azure permissions, and they turn out to be quite complicated. I have followed the tutorials, but it still doesn't work. By contrast, the GCP permissions are very simple to set up and work cleanly, while still allowing you to control access quite granularly.

This appears (on the surface) to be an artifact of legacy Active Directory. Microsoft needs to support a lot of older models that it developed over the years for large clients, and can't simply abandon the ways of the past. Google, by contrast, has only the security and permission model developed for GCP, with no legacy headaches to worry about. Such is the price of success.


GCP Pricing Details

Under the covers of GCP's pricing platform there are some interesting things to be aware of.

All of the items (and there are many of them) can be priced down to billionths of a dollar, or of another currency.

Items are priced by:

  • Units - whole units of currency, such as USD.
  • Nanos - fractional units of currency equal to one billionth of a unit.

Take, for instance, our friend the old, reliable IP v4 address. Unused addresses are priced at:

0 units and 10,000,000 nanos per hour

This means that each unused address costs:

$0 + 10,000,000/1,000,000,000 per hour

Or in plain English: $0.01/hour, or $7.20 per 30-day month.
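If you're writing code against this, the conversion is a one-liner. A minimal sketch (the units/nanos names follow the scheme described above; HOURS_PER_MONTH assumes the 30-day month used throughout these posts):

    NANOS_PER_UNIT = 1_000_000_000
    HOURS_PER_MONTH = 24 * 30  # 30-day month

    def to_price(units, nanos):
        """Convert a GCP units/nanos pair into a plain decimal price."""
        return units + nanos / NANOS_PER_UNIT

    hourly = to_price(units=0, nanos=10_000_000)
    print(f"${hourly:.3f}/hour")                     # $0.010/hour
    print(f"${hourly * HOURS_PER_MONTH:.2f}/month")  # $7.20/month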

Of course, the fascinating thing here is the ability to control pricing at a very fine-grained level. There is also evidence that in the future they will charge per second for some resources, though I don't believe they do this today.


Next I am going to dive into the AWS implementation and leave my Azure frustrations behind. I'll post another update when I get the AWS pricing working.


When is a Serverless App not Serverless?


When you have to specify the server operating system:


IPv4 Address Price - Single Static IP - Aug. 20, 2019


A few years ago I saw an article about IP v4 addresses and how IoT was causing us to run out of them quickly. At the time it seemed odd, because I knew that you could get them for nothing, or next to nothing, from every cloud provider.

If we were really running out, wouldn't you expect to pay something for them? Wouldn't supply and demand dictate that they couldn't be given away for free?

Earlier posts:

http://jonathanfries.net/ipv4-address-price-single-static-ip-feb-20-2017/
https://jonathanfries.net/ip-address-oct62016/
http://jonathanfries.net/ip-address-price-single-static-ip/


But perhaps this is really starting to happen now.

I just received an email from Google Cloud suggesting that, perhaps, There's No Such Thing as a Free IP Address Either (TNSTAAFIAE).

From the email:

First, we’re increasing the price for Google Compute Engine (GCE) VMs that use external IP addresses. Beginning January 1, 2020, a standard GCE instance using an external IP address will cost an additional $0.004/hr and a preemptible GCE instance using an external IP address will cost an additional $0.002/hr.

In addition, there are price increases at Azure, though not in the classic resource deployment model.
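Note that the announced hourly rates line up with the monthly figures in the spot prices below once you multiply by a 30-day month (720 hours) - a quick sanity check:

    HOURS_PER_MONTH = 24 * 30  # the 30-day month used below

    for label, hourly in [("Standard GCE + external IP", 0.004),
                          ("Preemptible GCE + external IP", 0.002)]:
        print(f"{label}: ${hourly * HOURS_PER_MONTH:.2f}/month")

    # Standard GCE + external IP: $2.88/month
    # Preemptible GCE + external IP: $1.44/month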


Here are the spot prices today for single static IP addresses (based on a 30-day month):

Aug. 20, 2019

Azure: https://azure.microsoft.com/en-us/pricing/calculator/#
Google: https://cloud.google.com/compute/pricing#ipaddress
Amazon: https://aws.amazon.com/ec2/pricing/on-demand/

Azure - East US

ASM First 5: $0/month
ASM Additional: $2.59/month
Basic ARM First 5: $2.88/month
Basic ARM Additional: $5.76/month
Standard ARM All: $3.60/month

Google

In Use (Standard VM): $2.88/month
In Use (Preemptible VM): $1.44/month
Not in Use: $7.20/month

Amazon - US West N. California

First, In Use: $0/month
Not Running/Additional: $3.60/month


I would say that there is some pretty good evidence that the price of IP addresses is going up.

My second thought is that cloud pricing continues to be very complicated. I think I will write a second blog post about that.

Kudos to Amazon for continuing to keep prices low. Will they be able to do that in the long run?

If this is really supply and demand at work, then expect the price to rise.


Anxiously Awaiting My AWS DeepRacer


I ordered my AWS DeepRacer about six months ago, when it was supposed to be ready in March. At long last, I'm expecting to get it this week.

If you aren't familiar, DeepRacer is Amazon's 1/18th-scale robotic self-driving (toy) car. It uses reinforcement learning (a branch of machine learning) to teach the car to drive and help it complete various courses.


The hardware was originally supposed to be out earlier this year, but has been somewhat delayed.

In the meantime I've been working with the AWS DeepRacer simulator. So far, I've managed to train a model that completes one of the easier tracks every time.

I'm pretty excited and looking forward to having the actual hardware. I'm probably not ready for the DeepRacer League yet, but I am having fun with it.

So far, the cost of running the AWS RoboMaker simulation jobs (used for training and evaluating models) has not been too expensive. I'll have to see what I spend over the coming weeks; it appears to be $10-$15/day for a few training and simulation runs a day, but that is based on a pretty small sample size.