
Posts

CodeReady Container address range

I've been working with Red Hat's CodeReady Containers and recently hit a networking issue that might not be obvious to all users. Hyper-V assigns your VM an address in the 172.x.x.x range. CRC also assigns addresses, by default, in the 172.30.0.0/16 address space. This all works well unless Windows assigns your VM an address in 172.30.x.x as well. Then you get a bunch of networking issues where you can't connect outside the cluster, which means you can't download images, etc. The solution: reboot your PC and restart CRC until you get an address outside the 172.30.x.x range.
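A quick way to check whether the address Hyper-V handed your VM collides with CRC's default range is a few lines of Python with the standard `ipaddress` module. A minimal sketch (the 172.30.0.0/16 value is CRC's default mentioned above; the function name is my own):

```python
import ipaddress

# CRC's default internal address space (from the post above)
CRC_RANGE = ipaddress.ip_network("172.30.0.0/16")

def collides_with_crc(vm_ip: str) -> bool:
    """Return True if the Hyper-V assigned address falls inside CRC's range."""
    return ipaddress.ip_address(vm_ip) in CRC_RANGE

print(collides_with_crc("172.30.4.17"))  # inside 172.30.0.0/16 -> True
print(collides_with_crc("172.28.99.2"))  # outside -> False
```

If the check returns True, that's the reboot-until-it-moves situation described above.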

CodeReady Containers inside vs. outside

Red Hat has produced a single-node Kubernetes install, CodeReady Containers (CRC), that can run on a single developer's machine. This allows you to spin up a cluster, administer it, and install your own software to it in an environment you completely control. To kick the tires, I wanted to do the following: deploy a dead-simple application with one REST endpoint; be able to access it from outside the cluster (i.e. figure out ingress using Istio, not just the OpenShift automatic route); and use an external build process with Maven and Google Jib (I like the buildpack-like approach that OpenShift provides, but I wanted to start without depending on all of that magic). In order to get Jib to work, I needed two things: I had to go searching for how the registry that is bundled into CRC is exposed. You can find the route in th...

All things come to an end - plan for it

In 2011 I built a PC with an i7 2600k that stood me in good stead until two weeks ago. I had upgraded disks, memory and video cards over the years, but while upgrading my memory, I must have flexed the 9-year-old motherboard more than it wanted and I got an ugly sight: the CPU fail LED glowing to show me that the computer was dead :-( My how PC building has changed in 9 years! Due to work commitments I couldn't take the time to build its replacement, but the folks at MicroCenter hooked me up with a very nice AMD build. I got it home, double-checked that it would POST correctly and I was off to the races. First, I installed the drives from my old PC into the new box, turned it on and nothing. I had forgotten to put the boot configuration into compatibility mode! My old drives were created before UEFI, so I needed to turn that on. One change and bingo! I got the Windows boot screen. A little nervous waiting while it said that it was configuring ...

Cardinality is critical

One important facet of software design is the cardinality between items in your system design. As an example, consider a simple credit-card-based design from telecommunications. Given this design, you can explore many boundary conditions with your business partners. Can you have a customer without services? Can you have billing information without an account? How about the other way around? I've diagrammed this as having multiple BillingInfo entities. Should they allow overlap? How about one-time payments? Refunds and chargeback processing? Getting into the details of how the business works will provide you a lot of leading conversations as you develop your system, but I want to focus on just the one relationship in dark black above, specifically the qualitative dimensions of the 0 or more accounts. If you think about a mass consumer product, you will have a fairly low cardinality of accounts. A wireless customer may have a family plan with multiple phones,...
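To make those boundary questions concrete, here is a minimal sketch of the described entities as Python dataclasses. The class and field names are my own illustration of the telecom design discussed, not the original diagram:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BillingInfo:
    method: str  # e.g. "credit card"; should two BillingInfo entries overlap in time?

@dataclass
class Account:
    account_id: str
    # 0..* billing entries -- an empty list encodes "account without billing info"
    billing: List[BillingInfo] = field(default_factory=list)

@dataclass
class Customer:
    name: str
    # 0..* accounts -- the dark-black relationship discussed in the post
    accounts: List[Account] = field(default_factory=list)

# The model makes a customer with zero accounts representable.
# Whether that is a *valid* business state is the conversation to have.
c = Customer(name="Jane Doe")
print(len(c.accounts))  # 0
```

Writing the cardinalities down as types forces the "can this be empty?" questions to be answered explicitly rather than discovered in production.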

The process of building software

The process of building software follows well-defined steps. Source code and other artifacts are compiled and assembled into some sort of deployable artifacts; primary tools in this space are simple scripts, Make, imake, Ant, Maven, Groovy, go build, cargo build, etc. Quality control steps are applied, either manual or automated: manual testing, unit testing (JUnit, NUnit, cargo test, go test, etc.), integration testing, system testing, user testing, load and performance testing, security testing, usability testing, etc. The software is delivered, which can take many different forms: manual installation, automatic patching (Patch Tuesday), or being made available in a repository like Maven Central, Docker Hub, etc. The major evolution in this process has not been the steps, but rather the boundaries between them. When I first started, each and every step of this process was a manual handoff. Now, large numbers of companies have software automatically migr...

The only constant is change

Perhaps one of the few common tool types for all developers is a change control system. During my career, I have used a progression of them: SCCS, RCS, ClearCase, CVS and SVN, Serena Dimensions, and Git (GitLab and BitBucket). The key features of a version control system are keeping all versions of a file, allowing you to tag or label a set of file versions, and supporting concurrent editing by multiple people. It's this last item that sparks a lot of very passionate points of view. You will hear statements like "You should only develop on master" or "You should/shouldn't have a branch for feature development". What you need to keep in mind is that the right answer for a particular development environment will depend on many factors. Let's talk about a few of them. Deployment size: the number of dependencies increases with deployment size. What is reasonable for a micro-service in a single language for a single deployment is not going to be t...

Lessons from CORBA

In the mid-90's, I became experienced with CORBA distributed programming environments. While it's considered a dead technology with a lot of flaws, I would like to look at it specifically from a boundaries point of view. With the advantage of hindsight, we can look at the characteristics of a distributed programming environment: Interface Definition Language (IDL) - is not a deal breaker. If you look at current environments like gRPC, the use of a language-independent definition language allows wide adoption. The ability to do code generation for your interfaces ensures that you can implement clients and servers with static type checking (if your language supports that). At the same time, JSON is also widely used in REST implementations. So, another successful alternative is an interpreted on-the-wire format. Leaking language details into other implementations is not good. Anyone who implemented a CORBA system would have to learn and und...

Object Oriented Programming is Dead! Long Live Object Oriented Programming!

There have been a bunch of blog entries, with slightly different points of view, discussing the decline of object oriented programming. I think most of them miss the mark in communicating the arc of history when it comes to programming paradigms. At a fundamental level, as programming languages have evolved they have improved on two different dimensions: Abstraction - allowing the developer to develop components at higher and higher levels of abstraction. Encapsulation - allowing the developer to hide more and more details from the users of their components. Let me give you a specific example from my C++ days. At the time, I was using the Oracle Call Interface (OCI) - a C language API to connect to Oracle databases. Using Perl::DBI, we used code generation to automatically generate C++ classes based on the schema metadata. The class layout looked something like this: This object oriented design had a few design features that I consider object...

Object Oriented Programming - Use the right objects

In the mid-90's, there were three major object oriented analysis and design methodologies among the leaders in the field: The Booch method - In my opinion the most technical and exacting of the methods, it had symbols for things like abstract classes, parameterized types, etc. The major problem I saw in using this method was that there was less advice in the analysis phase in terms of deciding what should be an object. The Object Modeling Technique (OMT) - This technique, promoted by Rumbaugh et al., had a primary goal of being a communication channel with customers. It also had drawing techniques that seemed to provide a comfortable transition from entity-relationship diagrams (ERD). This gave advice on picking objects - the classical picking out of the nouns from a requirements document. The Jacobson method (OOSE) - This method had all of the standard OO techniques like the others, but also added use case influenced design and officially categorizing o...

Informix row level locking - Breaking the process boundary

During the time of my previous post, the version of Informix on the Amdahl mainframe was upgraded to a new version that included row level locking. Though common now, at the time database vendors were still busy figuring out the best ways to perform row level locking. The method that Informix chose had an interesting unexpected feature. If you had a unique index on a table and you inserted a row, the database implemented a row level lock on the yet-to-be-committed index row by locking the next row in the table. If there was no index key larger than that, it would lock to the end of the index. This meant that inserting sequential values at the end of the table, a very common occurrence for our system, would in effect behave like a full table lock. A clean solution to this problem would have been to apply the CQRS pattern so that the inserts into the database could be queued without affecting the user. But that would have required a complete refactoring of the ...
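A toy model makes the behavior easy to see. This is a deliberately simplified sketch of next-key-style locking, not Informix's actual implementation: an insert locks the first index entry greater than the new key, or an end-of-index sentinel when no such entry exists, which is why sequential appends all contend on the same lock:

```python
import bisect

class NextKeyLockTable:
    """Toy next-key locking: an insert locks the entry after the new key."""
    END = "<end-of-index>"  # sentinel lock target past the last key

    def __init__(self, keys):
        self.keys = sorted(keys)  # the unique index
        self.locks = set()        # lock targets held by uncommitted inserts

    def lock_target(self, new_key):
        i = bisect.bisect_right(self.keys, new_key)
        return self.keys[i] if i < len(self.keys) else self.END

    def insert(self, new_key):
        target = self.lock_target(new_key)
        if target in self.locks:
            return False  # blocked: another uncommitted insert holds this lock
        self.locks.add(target)
        bisect.insort(self.keys, new_key)
        return True

t = NextKeyLockTable([10, 20, 30])
print(t.insert(31))  # appends past the last key, locks END -> True
print(t.insert(32))  # also targets END, still held -> False (blocked)
print(t.insert(15))  # mid-index insert locks key 20 instead -> True
```

Every append past the current maximum targets the same end-of-index lock, so concurrent sequential inserts serialize exactly as the post describes.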

curses setjmp/longjmp

In 1992, I was maintaining a UNIX inventory system that was developed around a screen template system that used the curses library to implement a pretty standard menu tree with data screens as leaves. My job was to add a new screen, the red box, and the red transition lines to do a master-detail pair of screens. The way this system worked is that a set of screen definition files would be run through a code generation program to generate the C code that would keep track of the menu path and the fields in the data screens. Each of these screens would have an 80x24 template with field names followed by a special replacement character for different field types. For example, part of the screen could say: Ship Date: MM/DD/YY or Comments: @@@@@@@@@@@@@@@@@@@ and the code generator would create all of the code necessary to define structures that had a ship_date member and a comments member, handle moving from field to field with the tab key, etc., along w...
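The front half of a generator like that is just template scanning. Here is a rough Python sketch of the field-extraction step for the two example lines above; the marker syntax and type mapping are my illustration, not the original tool's:

```python
import re

# A labeled field: a label, a colon, then marker characters describing the type.
FIELD = re.compile(r"(\w[\w ]*):\s*([@MDY/]+)")

def parse_fields(template: str) -> dict:
    """Map each labeled field in a screen template to a rough field type."""
    fields = {}
    for label, marker in FIELD.findall(template):
        kind = "date" if "/" in marker else "text"
        name = label.strip().lower().replace(" ", "_")  # e.g. ship_date
        fields[name] = kind
    return fields

screen = """Ship Date: MM/DD/YY
Comments: @@@@@@@@@@@@@@@@@@@"""
print(parse_fields(screen))  # {'ship_date': 'date', 'comments': 'text'}
```

From a mapping like this, emitting a C struct with a `ship_date` member and a `comments` member is a straightforward second pass.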

The Painter and the Thief

I had the privilege of seeing the documentary The Painter and the Thief last weekend. The film tells the story of what happens when the painter of two stolen works of art befriends one of the people who stole from her. It's an absolutely riveting movie that will have you thinking you're watching a drama and not a documentary. I don't want to say anything more about the events detailed in the movie, but I did marvel at how different a relationship can be when expected boundaries are overcome. If these two people can overcome the expected boundaries between criminal and crime victim, then there should be no reason that DevOps engineers can't do the same. After all, both sides of that relationship want the same thing for their companies and aren't starting with a crime standing between them. And hopefully, the movie will get a distribution deal so that you can see it and make up your own mind.

Fast Fourier transform is slow

In the early 90's, when I was still getting my undergraduate computer engineering degree, I took a class in signal processing that I really enjoyed. One of the major projects we were tasked to do was to implement a two-dimensional fast Fourier transform. The class spent several lectures proving that a simple implementation of convolution in the spatial domain required O(n^2) math operations. On the other hand, if you switched to the frequency domain, you required O(n log(n)) operations to perform the FFT, O(n) to perform a single multiplication per pixel and then another O(n log(n)) to convert it back to the spatial domain. We proved it was better! And then we implemented it, and it was orders of magnitude slower. Theory and practice didn't match. We showed it was slow! The issue was a boundary we had abstracted away in our performance proofs - memory access was not a uniform cost. The math operations we were optimizing ...
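The two routes the class compared can be sketched in a few lines of NumPy. This uses 1-D circular convolution for brevity rather than the 2-D project described, but it shows the same equivalence: the O(n^2) direct sum and the O(n log n) FFT route compute the same result:

```python
import numpy as np

def conv_direct(x, h):
    """Circular convolution by the definition: O(n^2) multiply-adds."""
    n = len(x)
    return np.array([sum(x[k] * h[(i - k) % n] for k in range(n))
                     for i in range(n)])

def conv_fft(x, h):
    """Same result via the convolution theorem: FFT, pointwise multiply, inverse FFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 0.5, 0.0, 0.0])
print(np.allclose(conv_direct(x, h), conv_fft(x, h)))  # True
```

The operation counts favor the FFT route on paper, but as the post points out, the proof assumes uniform memory access cost, and the FFT's scattered access pattern is exactly where that assumption breaks down.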

What's in a name?

Naming things in software engineering is difficult, so I pondered what to call this blog for a long time before naming it. I came up with boundary waters for a few reasons: I grew up near the Boundary Waters Canoe Area, and I have a lot of great memories visiting that majestic part of Minnesota. Software development has a lot of topic areas that are related to boundaries, and it was a common theme in a lot of the initial topics I'm planning on writing about. There are a lot of different boundaries at the BWCA that I can use for analogies: the US - Canada border, lakes & islands, the water itself. The .dev domain became available and it's very unlikely to get confused with the actual geographic area. I'm starting this blog on the one-year anniversary of changing jobs. I left a company in the telecommunications industry I had spent over 26 years at to go work at Visa. I now have a systems architecture role that does not include coding on a regu...