I'm Tyler Reckart, a software developer at BoomTown focusing on front-end architecture and design.

Speaking Elvish

June 21st 2018

One of my passions in programming is the console. Yes, the dim, graphically uninteresting black hole of the command-line. As a software developer, nearly all of my work is done inside of a terminal window. Aside from the occasional foray into VS Code, most of my work is done on or around the command-line. This encompasses everything from running and inspecting local Docker containers to UI development with React in vim. Because of this, I tend to think a lot about how to extend and customize the terminal, particularly about how to make my work more productive and enjoyable.

For the past few years, zsh has been my default login shell of choice. As a devout Unix user, the Z shell is an easy choice when it comes to an extensible, user-friendly bash alternative. I've even gone so far as to develop my own themes to further customize my shell. One of those themes has even gained a little popularity over the years.

It's a fast and stable choice for your home on the command-line. But I think that sometimes we need to go out on a limb and try out something more interesting.

This brings me to a few weeks ago, when I discovered Elvish. It looked like an interesting project with surprisingly thorough documentation. A quick summary on my end wouldn't do the language justice, though, so here's how the language's developer describes it:

"Elvish is a cross-platform shell, supporting Linux, BSDs and Windows. It features an expressive programming language, with features like namespacing and anonymous functions, and a fully programmable user interface with friendly defaults. It is suitable for both interactive use and scripting."

Here are a few of the things that initially drew me into the project:

  • The Elvish programming language is clean, expressive, and powerful. An easy-to-comprehend syntax makes picking up the language a lot easier than you might expect. It supports namespaces, exception handling, and proper data structures (there's a short sketch of the syntax after this list).
  • It already has a well-documented, rich library of built-in functions to aid in the development of your project's architecture.
  • Awesome interactive features like as-you-type syntax checking and highlighting, custom completions, directory & command history, and more. There's even a built-in file explorer.
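
To give you a quick taste, here's a tiny sketch of the language using the syntax at the time of writing (fn, each, and echo all ship with the shell):

# A named function. The [args]{ body } block is the same
# syntax used to write anonymous functions.
fn greet [name]{
  echo "Hello, "$name
}

greet world
# Hello, world

# Passing an anonymous function to the each builtin:
each [x]{ echo $x } [foo bar baz]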

With the goal of configuring my own prompt in mind, I set out to learn Elvish. Building on my previous shell themes, I used that time to develop Gondolin[1], what I see as the next evolution of the git-centered prompt workflow that I have been developing over the past few years. What set the development of this prompt apart from the others was the difficulty involved. Elvish is still a very young language, and thus has a small ecosystem of libraries and modules.
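
To give a sense of what prompt programming looks like, here's the simplest possible custom prompt (a minimal sketch, not Gondolin itself; edit:prompt is the hook Elvish exposes, and tilde-abbr abbreviates your home directory to ~):

# A minimal custom prompt: the current directory followed by ' > '.
edit:prompt = { tilde-abbr $pwd; put ' > ' }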

I didn't have years of established methods to draw on, but that didn't stop me. By grokking the implementations of these features in a shell I was already familiar with, I was able to completely reimplement features like a branch status readout and an actively-updated, colorized timestamp of the last commit in Elvish.

In the development of this prompt, I've really come to love Elvish. In fact, it's now my default login shell. To reiterate, though, the language is still in its infancy. The syntax is still a little wonky, and backwards-incompatible changes still occur with relative frequency. If you're looking for absolute stability, Elvish might not be for you just yet.

With that being said, so far I am very happy with Elvish, and if this sort of thing interests you I encourage you to take a look. I'm looking forward to watching and contributing to the development of Elvish going forward.


  1. Gondolin was a city in the J. R. R. Tolkien legendarium, founded by the Elves. Found and founded with divine inspiration, it is hidden by mountains and endures for centuries before being betrayed and destroyed.

IWC Tribute to Pallweber Edition “150 Years”

Anyone that knows me well can tell you that aside from computers and technology, one of my greatest passions is mechanical watches. There’s just something captivating about taking mechanical energy and, through a series of gears and regulation mechanisms, using it to accurately measure time. Now, I’m fully aware that mechanical watches were made all but obsolete in the 70s at the height of the quartz revolution. Quartz watches are cheap, durable, and easy to read. So, they’re an easy choice.

Mechanical watches and digital watches really couldn’t occupy more different corners of the watchmaking industry, right? Well, not exactly. There’s a strange overlap, a sort of twilight zone that exists in the middle where mechanical becomes digital and vice versa. You won’t find a dot matrix screen, a battery, or a quartz crystal in these watches. The term digital doesn’t actually refer to an electronic display. It stems from digitus manūs, Latin for digit of the hand. Over the centuries the term was adapted to mean any single numeral from zero to nine. It wasn’t until the early days of computing, when machines began representing information with discrete digits (the ones and zeros of binary), that the term became synonymous with computing.

So, when we talk about mechanical digital wristwatches, what we’re actually talking about is a watch that displays the time in the aforementioned digits, not through an electronic display. Surprisingly enough, if you look back at the history of mechanical watchmaking, digital displays aren’t anything new. We can observe examples of these displays fitted in pocket watches that date back all the way to the late 19th century.

An original IWC Pallweber pocket watch, mid-1880s

One of the most significant wristwatches today to feature a mechanical digital display is the A. Lange & Söhne Zeitwerk. The Zeitwerk was Lange’s first watch with a digital display, and in typical Lange fashion, inside of it is a patented jumping minutes mechanism, unlike earlier iterations of the mechanical digital display that relied on a creeping minutes display. In fact, the Zeitwerk was both the first watch to feature a jumping mechanical digital display and the first to orient that display from left to right.

A. Lange & Söhne Zeitwerk

Now, the jumping hour mechanism isn’t anything new. Pocket watches like the original IWC Pallweber had it back in the late 19th century. Traditionally, these complications paired a jumping hour display with a creeping minutes display that would often present the numerals at odd, undesirable angles. In the Zeitwerk and the most recent iteration of the IWC Pallweber, the minute displays jump instantly, and while that sounds easy enough, just consider the amount of instantaneous torque required to move those discs. A typical jumping complication like a date disk advances only once per day. The three disks needed to display the hour and the two digits of the minutes require a total of 1,608 advances per day: 1,440 for the ones digit of the minutes, 144 for the tens digit, and 24 for the hours. Naturally, the Zeitwerk has a thicker mainspring to compensate for the amount of torque the barrel needs to generate.

Annotated A. Lange & Söhne Calibre L043.1

In addition, Lange’s design reverses the conventional principle of winding up and winding down. A high-friction bearing for the spring barrel is used to wind the movement. This change ensures that the barrel wheel can use low-friction bearings to turn as the watch winds down, leaving more energy in the mainspring. All the while, as the barrel winds down, the Zeitwerk’s remontoire system acts as a mechanical capacitor. It gains, stores, and releases the energy required to move each disc every 60 seconds. It works like a second mainspring, wound by the movement, that locks into place over the course of 60 seconds. At the end of each minute, a y-shaped lever slips free and releases the pent-up torque required to turn the hours and minutes displays. However, this sudden release of energy needs an outlet to prevent damaging the movement. Lange has this covered too with the use of a one-step fly vane, a component that acts as a tiny revolving door, creating air resistance to slow the action of the movement while still allowing the disk to advance in a fraction of a second. It’s an incredibly complex and truly astonishing mechanism.

Remontoire systems are used as a means of evening out power delivery to the escapement. With a remontoire, the only force applied to the escapement is that of the remontoire’s spring. This isolates the escapement from variations in power coming from the mainspring and drive train, which are instead used to wind the remontoire’s spring as it drives the escapement. The remontoire’s spring then provides an almost unvarying amount of torque to the escapement, which in turn allows the movement to deliver the consistent burst of torque needed to turn each of the display disks in a fraction of a second.

Like all of Lange’s timepieces, the Zeitwerk is a bastion of mechanical excellence. However, it isn’t the only watch on the market with a mechanical digital display worth mentioning. The IWC Tribute to Pallweber Edition “150 Years” takes a more traditional approach and, as the name implies, pays tribute to the Pallweber pocket watches of the late nineteenth century. Like the Zeitwerk, the Pallweber implements a jumping hour and minute display. However, unlike the Zeitwerk, the Pallweber’s power distribution isn’t handled through a remontoire system. The IWC Calibre 94200 has two barrels powering two separate gear trains, one for timekeeping and one for the displays. This allowed the movement designers to separate their power delivery concerns for each display. Because of the double-barrel design, the Pallweber has a hefty 60 hour power reserve. The Zeitwerk caps its mainspring at 36 hours, cutting you off before you pass the point in the mainspring’s power delivery curve where the remontoire spring is no longer being wound. Neither of these watches conform to the traditional notions of a mechanical wristwatch. I think that’s what interests me the most about this complication.

Type Level Programming

As tools and methodologies for software development have evolved, the functional programming paradigm has washed over the tools we use across the stack. Haskell has been a go-to tool for data analysis for close to three decades. On the other end, languages like TypeScript and Elm have hit the front-end landscape like a wave, allowing developers to build type-safe applications that compile to browser-readable JavaScript. You've probably heard functional programming advocates discussing the benefits of these tools and what you can do by putting more information into a type system. And well, they're right. On the front end, if you look closely, you can see how languages like Elm influenced the architecture of libraries like Redux.

Statically-typed programming languages give you more control over how you build software.

As a note, I'm not going to spend time covering basic types and primitives here. Understanding higher kinded and dynamically kinded types requires a solid understanding of Haskell's basic type system and its primitives. If you don't have that foundation, this may not be the right article for you; it may end up being more confusing than helpful.

The Higher Kinds

As a quick refresher, at its core, a type is a way of classifying things. We assign types to functions and use them to compare a function's declared type and the values it returns. So, how do we define types in Haskell? If you're not already aware, a value constructor accepts a value and yields a value. In a similar manner, a type constructor accepts a type and yields a type.

data Maybe a
  = Just a
  | Nothing

Above, we declare a type Maybe that consists of two data constructors: Just, which accepts a value of type a, and Nothing, which does not accept any values at all. Before we go any further, though, let's discuss those data constructors a bit more in depth.

ghci> :t Just
Just :: a -> Maybe a
ghci> :t Nothing
Nothing :: Maybe a

Just has a function type. This means that any value we give it will become a Maybe of that type. Nothing, however, can conjure up whatever type it wants without needing a value at all. This pattern is actually pretty common, and when we declare a type Nothing :: Maybe a, what we're actually saying is that Nothing is a value of Maybe a for any and all types a that may be provided to it.

Armed with that knowledge, you might be driven to ask: what is the type of Maybe?

ghci> :t Maybe

<interactive>:1:1: error:
    Data constructor not in scope: Maybe
    Perhaps you meant variable ‘maybe’ (imported from Prelude)

Hold on a second. What's going on here? Does this mean that types don't have types of their own? Well, sort of. Types have kinds.

ghci> :kind Maybe
Maybe :: * -> *

What? What is *? * is the kind of types which have values. At its core, what this means is that Maybe accepts a type that has values and yields a type that has values. This is called higher kinded polymorphism. As a generalization, higher-kinded polymorphism is a method for abstracting over type constructors, analogous to how higher order functions abstract over values.
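
We can see this in action by applying Maybe to a type that has values:

ghci> :kind Maybe Int
Maybe Int :: *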

data HigherKinded a b
  = Bare b
  | Wrapped (a b)

Haskell's kind inference is fairly simple and straightforward. If you scroll back up to the top, you'll see the similarities between it and Haskell's type inference. Because we apply the type a to the type b in Wrapped (a b), Haskell knows that a must have the kind * -> *: it accepts a type of kind * and returns a type of kind *. In English, this means that HigherKinded accepts two types. The first is a type constructor that does not have values itself, but when provided with a type that does have values, yields a type that does. The second is simply a type that has values. Finally, it returns a type that can have ordinary values. Think about how the Maybe type's kind would be defined in this way.
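
And we can confirm that inference in ghci:

ghci> :kind HigherKinded
HigherKinded :: (* -> *) -> * -> *
ghci> :kind HigherKinded Maybe Int
HigherKinded Maybe Int :: *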

Dynamically Kinded Programming

Moving on from higher kinded types, let's take a moment to discuss their lesser-known counterpart. The easiest way to discuss dynamically kinded types is through example.

ghci> data Void
ghci> :kind Void
Void :: *

When we declare a type Void and subsequently check that type's kind, we're greeted by *. Are you surprised? In the same way that you can productively program at the value level with dynamic types, as you learn to do when building a foundation in functional programming, you can also program at the type level with dynamic kinds.

At its core, that's what * represents.

Now that you understand that, we can start encoding our first type level numbers. As an example, we can start by defining types for the Peano natural numbers. Defining our numbers is as easy as declaring two empty data types: Zero, and the Succ (successor) of some natural number.

data Zero
data Succ a

type One = Succ Zero
type Two = Succ One
type Three = Succ Two
type Four = Succ (Succ (Succ (Succ Zero)))

As you can see from the example above, declaring our number types is as easy as applying the type constructors we defined. When defining subsequent types such as Two or Three, we can either reference already-defined types, or we can nest applications directly, as in the case of the Four type. However, this method isn't without its caveats. There's nothing preventing us from declaring a type with type One = Succ Bool, which doesn't make any sense. In order to get around this, we'll need to introduce more kinds than the mere * kind that we've been dealing with up until this point.
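
In fact, ghci happily accepts the nonsensical type, since Bool has exactly the kind * that Succ expects:

ghci> :kind Succ Bool
Succ Bool :: *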

Data Kinds

The DataKinds GHC extension allows us to promote data constructors into type constructors. As a consequence, this also promotes type constructors into kind constructors. As with everything above, the best way to convey what this extension allows you to do is through example.

{-# LANGUAGE DataKinds #-}

data Nat = Zero | Succ Nat

The definition above declares a new type Nat that consists of two value constructors, Zero and Succ. The DataKinds extension also introduces a new kind Nat, which exists in a separate namespace. In addition, we get two new types: 'Zero, which has kind Nat, and 'Succ, which accepts a type of kind Nat and yields another type of kind Nat.

ghci> :kind 'Zero
'Zero :: Nat
ghci> :kind 'Succ
'Succ :: Nat -> Nat

Fairly simple, right? As an aside, if we ever need to promote something a level up, we prefix the name with an apostrophe: '. The ' is used to disambiguate where a name could refer to either level; otherwise, Haskell will infer which one you mean. After all, if we check the types, they should look the same!

ghci> :type Zero
Zero :: Nat
ghci> :type Succ
Succ :: Nat -> Nat
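
More importantly, the promoted constructors are properly kind-checked, so the nonsensical Succ Bool from earlier is now rejected at the type level (the exact message varies by GHC version, but it looks something like this):

ghci> :kind 'Succ Bool

<interactive>:1:1: error:
    • Expected kind ‘Nat’, but ‘Bool’ has kind ‘*’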

And that's all there is to it when it comes to promoting our type and kind constructors. We now have the ability to architect pretty basic types and kinds. In order to actually use them, though, we'll need to dive in a little deeper.

In the next article, part two of this series on type level programming, we'll discuss how to implement these types with GADTs, Vectors, Type Families, and more.

Higher Order Functions

March 24th 2018

In 2016, I set out to learn functional programming through Haskell. Partially it was the challenge of mastering a notoriously difficult-to-learn language, but I was most drawn to it because of its potential to radically change the way I approach problems in my day-to-day work. While I'm still far from mastering the language, a deeper understanding of functional programming has significantly influenced the way that I work.

One of the concepts that has influenced me the most has been higher order functions (HOFs). In Haskell, functions can take functions as parameters and return functions as results. HOFs are extremely useful when it comes to performing calculations by defining what stuff is rather than by the steps to perform an arbitrary state change. They're a really powerful tool when it comes to solving problems and thinking about programs.

If you're not already aware, functions in Haskell can officially only take one argument. However, when learning Haskell, you'll often find yourself calling functions with what looks like multiple arguments, like so:

ghci> max 4 5
5

We can also define a function that takes a function as its first argument and then applies that function twice to its second argument:

applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)
ghci> applyTwice (+3) 10
16

So, what's going on here? Both the max and applyTwice functions look like they're taking two parameters. Putting a space between two values in Haskell is function application: max 4 5 is really (max 4) 5, where max 4 returns a new function that is then applied to 5. Let's take a look at the applyTwice type declaration one more time.

applyTwice :: (a -> a) -> a -> a

You could also read this as: applyTwice takes a function (a -> a), one that takes something and returns the same type of thing; the second parameter and the return value are both of that same type a. Because of this, applying applyTwice to just its first argument yields a new function of type a -> a. This is called currying. You can think of max 4 the same way: as a function that takes one parameter and returns either 4 or that parameter, whichever is bigger.
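
To make partial application concrete, here's a quick sketch in ghci (atLeastFour is just a name I'm introducing for illustration):

ghci> let atLeastFour = max 4
ghci> atLeastFour 3
4
ghci> atLeastFour 10
10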

Another HOF that we can define is the map function. It takes two inputs, a function and a list, and applies the function to every element in the list.

map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x:xs) = f x : map f xs
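
A quick check in ghci shows it in action:

ghci> map (*2) [1, 2, 3]
[2,4,6]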

HOFs are powerful because they make your code more declarative: shorter, simpler, and easier to read. Let's take a moment to step back from Haskell and talk about HOFs in a more familiar context with JavaScript.

Just as with Haskell, the map function is an extremely useful tool for evaluating and applying functions to sets of data.

const users = [
  { name: 'user1', age: 16, description: 'the first user' },
  { name: 'user2', age: 18, description: 'the second user' },
  { name: 'user3', age: 20, description: 'the third user' },
  { name: 'user4', age: 22, description: 'the fourth user' },
];

However, before we get into the map function, let's take a second to think about the old way of evaluating a collection in JavaScript. When I started to learn how to program, for loops were the best way to iterate through objects in a given collection. If we wanted to iterate through the array we defined above and return an array that only contained adults, we would be able to define that function like so:

const getAdults = collection => {
  const results = [];

  for (let i = 0; i < collection.length; i++) {
    const user = collection[i];

    if (user.age >= 18) {
      results.push(user);
    }
  }

  return results;
};

In all reality, the above function will work just fine. However, we can do the exact same thing in a much more declarative way. Array.prototype.filter() is a native method that allows us to create a new array consisting only of the elements that pass a given logical test.

const getAdults = collection => (
  collection.filter(user => user.age >= 18)
);

getAdults(users); // [{ name: 'user2' ... }, { name: 'user3' ... }, { name: 'user4' ... }]

Look at that! By using a HOF, we were able to reduce the amount of code required to define the getAdults function from 13 lines to just 3. As another example, what if we wanted to perform an operation on each item in the array while returning a new array of the same length? In that case, we'd use Array.prototype.map().

Let's say that we want to capitalize the name of each user in our array. That can be done easily like so:

const capitalizeNames = collection => (
  collection.map(user => ({ ...user, name: user.name.toUpperCase() }))
);

capitalizeNames(users); // [{ name: 'USER1' ... }, { name: 'USER2' ... }, { name: 'USER3' ... }, { name: 'USER4' ... }]

And, while those were just a few simple examples of HOFs, I think they really help to illustrate just how useful these functions are when it comes to performing calculations by defining what stuff is rather than by the steps to perform an arbitrary state change.

Binary Search

Over the past few weeks, I've been reading through and studying the problems in Programming Pearls in an effort to better understand algorithms and their design. In the book, there's an excellent example of a binary search implementation. Working through the solution really caught my interest and inspired me to work through my own implementation of a similar algorithm in JavaScript.

Our binary search algorithm will work by evaluating a value in a set and determining whether it is equal to, less than, or greater than the value we are searching for. If the key value is less than the current index value, the stop index is set to the current index - 1. If the key value is greater than the current index value, the start index is set to the current index + 1. If the key value is equal to the current index value, the search stops.

Let's define a problem to build our algorithm around:

Given a sorted array of integers between 0 and 100, find and log all of the integers in that range that aren't in the array.

Now that we have our challenge, we need to outline the basic components of our algorithm. For the purposes of this example, we need an algorithm that takes an array and a key value and returns the index of the key value if it is present in the array. If the key value is not found, the function will return -1.

function binarySearch(arr, key) {
  if (arr[index] == key) {
    return index;
  } else {
    return -1;
  }
}

This is the base of our algorithm. However, this code won't work yet; index is undefined. We need to define our starting and stopping indices, as well as the current index value calculated from those variables.

function binarySearch(arr, key) {
  var startIndex = 0;
  var stopIndex = arr.length - 1;
  var index = (startIndex + stopIndex) >> 1;

  if (arr[index] == key) {
    return index;
  } else {
    return -1;
  }
}

Our starting index will be set to 0. The stopping index is dependent on the length of the array that we're evaluating. Our index value is then calculated by adding the starting and stopping indices together and performing a bitwise right shift (>> 1), which divides the sum by two and discards the remainder, giving us the middle index of the array. We'll use this to perform our comparisons.
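
For example, for a ten-element array the middle index works out like so:

console.log((0 + 9) >> 1); // 4, the same as Math.floor((0 + 9) / 2)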

Now we need to write a loop that increments or decrements our indices while the key value is not equal to the current index value and the starting index is less than the stopping index. At every step of our algorithm's execution, the current index value is evaluated, which determines what our algorithm does next.

...
while(arr[index] !== key && startIndex < stopIndex) {

  if (key < arr[index]) {
    stopIndex = index - 1;
  } else if (key > arr[index]) {
    startIndex = index + 1;
  }

  index = (startIndex + stopIndex) >> 1;
}
...

The finished algorithm:

function binarySearch(arr, key) {
  var startIndex = 0;
  var stopIndex = arr.length - 1;
  var index = (startIndex + stopIndex) >> 1;

  while(arr[index] !== key && startIndex < stopIndex) {
    if (key < arr[index]) {
      stopIndex = index - 1;
    } else if (key > arr[index]) {
      startIndex = index + 1;
    }

    index = (startIndex + stopIndex) >> 1;
  }

  if (arr[index] == key) {
    return index;
  } else {
    return -1;
  }
}

Our final conditional statement checks to see if the value was found, and if so, returns the index of the key value. If the value is not found, the function returns -1.

Now that we have our algorithm, let's return to our problem statement. We need to perform a binary search on an array containing 1000 random integers. Let's build a function that takes an empty array, populates it with 1000 random values, and then sorts the result so that the array can be binary searched.

var myArr = [];

function populateArray() {
  // Generate a random integer between min (inclusive) and max (exclusive)
  function generateRandomInteger(min, max) {
    return Math.floor(Math.random() * (max - min) + min);
  }

  // Populate the array
  for (var i = 0; i < 1000; i++) {
    myArr.push(generateRandomInteger(0, 100));
  }

  // Sort the array
  myArr.sort(function(a, b) {
    return a - b;
  });
}

By running the populateArray function and logging the result, we'll be left with something like this:

console.log(myArr); // [1, 2, 3, ...];

So now that we have our array, let's run our binarySearch function.

binarySearch(myArr, 5); // => 3
binarySearch(myArr, 10); // => 11

Notice that if our array contains duplicate integers, the index value returned by the binarySearch function will be the index of one occurrence of that integer, not necessarily the first. Now we've got one final step to answering our question. We need to run the binary search over every integer in our range and log all of the numbers that don't appear in our array. This can be done pretty simply by including the log inside of our return conditional in the binarySearch function and writing a test function that runs the search for each of the 101 keys from 0 to 100.

...
if (arr[index] == key) {
    return index;
  } else {
    console.log(`${key} not found!`);
    return -1;
  }
...
function test() {
  for (var i = 0; i <= 100; i++) {
    binarySearch(myArr, i);
  }
}

Running the test function will output the following to the console:

0 not found!
2 not found!
6 not found!
7 not found!
...

That's it! The algorithm and test function satisfy all of the requirements of our prompt.

Conclusion

Searching sorted arrays with a binary search function is extremely efficient because the maximum number of comparisons is limited by the conditions of our algorithm to O(log n). This is in contrast to a more traditional linear search method like indexOf, which has an efficiency of O(n). After averaging 100 test cases, the average execution time of our binary search function was a mere 0.0501 milliseconds in Chrome.
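
If you'd like to take a similar measurement yourself, here's a minimal sketch using the browser's performance.now() timer (the averaging approach here is my own assumption, not necessarily how the figure above was produced):

// Average the execution time of a single search over many runs
function timeSearch(runs) {
  var total = 0;

  for (var i = 0; i < runs; i++) {
    var start = performance.now();
    binarySearch(myArr, i % 100);
    total += performance.now() - start;
  }

  return total / runs;
}

console.log(timeSearch(100).toFixed(4) + 'ms average');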

If you'd like to see my code, you can view it on GitHub.