I'm Tyler Reckart, a software developer at BoomTown focusing on front-end architecture and design.

As tools and methodologies for software development have evolved, the functional programming paradigm has washed over the tools we use across the stack. Haskell has been a go-to tool for data analysis for close to three decades. On the other end, languages like TypeScript and Elm have hit the front-end landscape like a wave, allowing developers to build type-safe applications that compile to browser-readable JavaScript. You've probably heard functional programming advocates discussing the benefits of these tools and what you can do by putting more information into a type system. And well, they're right. On the front end, if you look closely, you can see how languages like Elm influenced the architecture of libraries like Redux.

Statically-typed programming languages give you more control over how you build software.

As a note, I'm not going to spend time covering basic types and primitives here. Understanding higher-kinded and dynamically kinded types requires a solid understanding of Haskell's underlying basic type system and its primitives. If you don't have that foundation, this may not be the right article for you. It may end up being more confusing than helpful.

The Higher Kinds

As a quick refresher, at its core, a type is a way of classifying things. We assign types to functions and use them to compare a function's declared type and the values it returns. So, how do we define types in Haskell? If you're not already aware, a value constructor accepts a value and yields a value. In a similar manner, a type constructor accepts a type and yields a type.

data Maybe a
  = Just a
  | Nothing

Above, we declare a type Maybe that consists of two data constructors. Just, which accepts a value of type a, and Nothing, which does not accept any values at all. However, before we go any further, let's discuss those data constructors a bit more in depth.

ghci> :t Just
Just :: a -> Maybe a
ghci> :t Nothing
Nothing :: Maybe a

Just has a function type. This means that any value we give it will become a Maybe of that type. Nothing, however, can conjure up whatever type it wants without needing a value at all. This pattern is actually pretty common, and when we declare Nothing :: Maybe a, what we're actually saying is that Nothing is a value of Maybe a for any and all types a that may be provided to it.

Armed with that knowledge, you might be driven to ask: what is the type of Maybe?

ghci> :t Maybe

<interactive>:1:1: error:
    Data constructor not in scope: Maybe
    Perhaps you meant variable 'maybe' (imported from Prelude)

Hold on a second. What's going on here? Does this mean that types don't have types of their own? Well, sort of. Types have kinds.

ghci> :kind Maybe
Maybe :: * -> *

What? What is *? * is the kind of types which have values. At its core, what this means is that Maybe accepts a type that has values and yields a type that has values. This is called higher-kinded polymorphism. As a generalization, higher-kinded polymorphism is a method for abstracting at the level of types, analogous to how functions already abstract at the level of values.

data HigherKinded a b
  = Bare b
  | Wrapped (a b)

Haskell's kind inference is fairly simple and straightforward. If you scroll back up to the top, you'll see the similarities between it and Haskell's type inference. Because we apply the type a to the type b in Wrapped (a b), Haskell knows that a must have the kind * -> *: it accepts a type of kind * and returns a type of kind *. In English, this means that HigherKinded accepts two types: the first of which is a type constructor that does not have values itself, but when applied to a type that does have values, yields one that does. The second argument is simply a type that has values. Finally, it returns a type that can have ordinary values. *Think about how the Maybe type's kind would be defined in this way.*
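To make that concrete, here's a minimal sketch (the values bare and wrapped are my own names, not from the text above) that fills the a slot with Maybe, whose kind is * -> *:

```haskell
data HigherKinded a b
  = Bare b
  | Wrapped (a b)

-- ghci> :kind HigherKinded
-- HigherKinded :: (* -> *) -> * -> *

-- Maybe has kind * -> *, so it fits the 'a' slot;
-- Int has kind *, so it fits the 'b' slot.
bare :: HigherKinded Maybe Int
bare = Bare 3

wrapped :: HigherKinded Maybe Int
wrapped = Wrapped (Just 3)
```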

Dynamically Kinded Programming

Moving on from higher kinded types, let's take a moment to discuss its lesser-known counterpart. The easiest way to discuss dynamically kinded types is through example.

ghci> data Void
ghci> :kind Void
Void :: *

When we declare a type Void and subsequently check that type's kind, we're greeted by *. Are you surprised? In the same way that you learn to productively program at the value level with dynamic types when building a foundation in functional programming, you can also program at the type level with dynamic kinds.

At its core, that's what * represents.

Now that you understand that, we can start encoding our first type-level numbers. As an example, we can start by defining types for the Peano natural numbers. Defining our numbers will be as easy as declaring two empty data types: Zero, and the Succ (successor) of some natural number.

data Zero
data Succ a

type One = Succ Zero
type Two = Succ One
type Three = Succ Two
type Four = Succ (Succ (Succ (Succ Zero)))

As you can see from the example above, declaring our number types is as easy as function application with the two types that we defined above. When defining subsequent types such as Two or Three, you can observe that we can either reference already-defined types, or chain applications directly as in the case of the Four type. However, this method isn't without its caveats. There's nothing preventing us from declaring a type with type One = Succ Bool, which doesn't make any sense. In order to get around this, we'll need to introduce more kinds than the lone * kind that we've been dealing with until this point.

Data Kinds

The DataKinds GHC extension allows us to promote data constructors into type constructors and, as a consequence, type constructors into kind constructors. Just as with the above, the best way to convey what this extension allows you to do is through example.

{-# LANGUAGE DataKinds #-}

data Nat = Zero | Succ Nat

The definition above declares a new type Nat that consists of two value constructors, Zero and Succ. The DataKinds extension also introduces a new kind Nat, which exists in a separate namespace. In addition to this, we get two new types: 'Zero, which has the kind Nat, and 'Succ, which accepts a type of kind Nat and yields another type of kind Nat.

ghci> :kind 'Zero
'Zero :: Nat
ghci> :kind 'Succ
'Succ :: Nat -> Nat

Fairly simple, right? And as an aside, if we ever need to promote something a level up, we prefix the name with an apostrophe ('). Where it would otherwise be ambiguous, the ' disambiguates; otherwise, Haskell will infer which one you mean. After all, if we check the types, they look the same!

ghci> :type Zero
Zero :: Nat
ghci> :type Succ
Succ :: Nat -> Nat

And that's all there is to it when it comes to extending our type and kind constructors. We now have the ability to architect pretty basic types and kinds. However, in order to actually use them, we'll need to dive in a little deeper.

In the next article, part two of this series on type level programming, we'll discuss how to implement these types with GADTs, Vectors, Type Families, and more.

In 2016, I set out to learn functional programming through Haskell. It was partially because of the challenge of mastering a notoriously difficult language, but I was most drawn to it because of its potential to radically change the way I approach problems in my day-to-day work. While I'm still far from mastering the language, a deeper understanding of functional programming has significantly influenced the way that I work.

One of the concepts that has influenced me the most is higher order functions (HOFs). In Haskell, functions can take functions as parameters and return functions as results. HOFs are extremely useful when it comes to performing calculations by defining what stuff is rather than by the steps to perform an arbitrary state change. They're a really powerful tool for solving problems and thinking about programs.

If you're not already aware, functions in Haskell can officially only take one argument. However, when learning Haskell, you'll often find yourself calling a function with multiple arguments like so:

ghci> max 4 5
5

We can also define a function that takes a function as its argument and applies it twice:

applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)
ghci> applyTwice (+3) 10
16

So, what's going on here? Both the max and applyTwice functions look like they're taking two parameters. Putting a space between two values in Haskell is function application. Let's take a look at the applyTwice type declaration one more time.

applyTwice :: (a -> a) -> a -> a

You could read this as: applyTwice takes a function (a -> a), which takes something and returns the same thing; its second parameter and return value are also of that type. What makes this work is currying. Likewise, you can think of max 4 as a function that takes one additional parameter and returns either 4 or that parameter, whichever is bigger.
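To see currying at work, here's a minimal sketch (the name maxWithFour is mine, for illustration): supplying max with only its first argument yields a new function that waits for the second.

```haskell
-- max :: Ord a => a -> a -> a
-- Applying max to a single argument yields a function of type Int -> Int.
maxWithFour :: Int -> Int
maxWithFour = max 4

main :: IO ()
main = do
  print (maxWithFour 5) -- prints 5, because 5 > 4
  print (maxWithFour 3) -- prints 4, because 4 > 3
```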

Another HOF that we can define is the map function. It takes two inputs, a function and a list, and applies the function to every element in the list.

map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x:xs) = f x : map f xs

HOFs are powerful because they make your code more declarative: shorter, simpler, and easier to read. Let's take a moment to step back from Haskell and talk about HOFs in a more familiar context with JavaScript.

Just as with Haskell, the map function is an extremely useful tool for evaluating and applying functions to sets of data.

const users = [
  { name: 'user1', age: 16, description: 'the first user' },
  { name: 'user2', age: 18, description: 'the second user' },
  { name: 'user3', age: 20, description: 'the third user' },
  { name: 'user4', age: 22, description: 'the fourth user' },
];

However, before we get into the map function, let's take a second to think about the old way of evaluating a collection in JavaScript. When I started to learn how to program, for loops were the best way to iterate through objects in a given collection. If we wanted to iterate through the array we defined above and return an array that only contained adults, we would be able to define that function like so:

const getAdults = collection => {
  const results = [];

  for (let i = 0; i < collection.length; i++) {
    const user = collection[i];

    if (user.age >= 18) {
      results.push(user);
    }
  }

  return results;
};

In all reality, the above function will work just fine. However, we can do the exact same thing in a much more declarative way. Array.prototype.filter() is a native method that allows us to create a new array that only consists of elements that pass a given logical test.

const getAdults = collection => (
  collection.filter(user => user.age >= 18)
);

getAdults(users); // [{ name: 'user2' ... }, { name: 'user3' ... }, { name: 'user4' ... }]

Look at that! By using a HOF, we were able to reduce the getAdults function from 13 lines to just 3. As another example, what if we wanted to perform an operation on each item in the array while returning the results in a new array? In that case, we'd use Array.prototype.map().

Let's say that we want to capitalize the name of each user in our array. That can be done easily like so:

const capitalizeNames = collection => (
  collection.map(user => user.name.toUpperCase())
);

capitalizeNames(users); // ['USER1', 'USER2', 'USER3', 'USER4']

And, while those were just a few simple examples of HOFs, I think they really help to illustrate just how useful these functions are when it comes to performing calculations by defining what stuff is rather than by the steps to perform an arbitrary state change.

Over the past few weeks, I've been reading through and studying the problems in Programming Pearls in an effort to better understand algorithms and their design. In the book, there's an excellent example of a binary search implementation. Working through the solution really caught my interest and inspired me to work through a front end implementation of a similar algorithm in JavaScript.

Our binary search algorithm will work by evaluating the middle value of a sorted set and determining if it is equal to, less than, or greater than the value we're searching for. If the key value is less than the current middle value, the stop index is set to the current index - 1. If it is greater than the current middle value, the start index is set to the current index + 1. If the key value is equal to the current middle value, the search stops.

Let's define a problem to build our algorithm around:

Given a sorted array of 1000 random integers between 0 and 100, find and log all of the integers between 0 and 100 that aren't in the array.

Now that we have our challenge, we need to outline the basic components of our algorithm. For the purposes of this example, we need a function that takes an array and a key value and returns the index of the key value if it is present in the array. If the key value is not found, the function will return -1.

function binarySearch(arr, key) {
  if (arr[index] == key) {
    return index;
  } else {
    return -1;
  }
}

This is the base of our algorithm. However, this code won't work yet. We need to define our starting and stopping indices, as well as the current index calculated from those values.

function binarySearch(arr, key) {
  var startIndex = 0;
  var stopIndex = arr.length - 1;
  var index = (startIndex + stopIndex) >> 1;

  if (arr[index] == key) {
    return index;
  } else {
    return -1;
  }
}

Our starting index will be set to 0. The stopping index depends on the length of the array that we're evaluating. Our index value is then calculated by adding the starting and stopping indices together and performing a bitwise right shift, which halves the sum (discarding any remainder) to find the middle index of our array. We'll use this to perform our calculations.
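As a quick aside, the right shift by one bit is just integer halving. A small sketch with my own example values (for the non-negative indices we deal with here, it matches flooring division by 2):

```javascript
const startIndex = 0;
const stopIndex = 9;

// Shifting right by one bit divides by two and discards the remainder.
const index = (startIndex + stopIndex) >> 1;

console.log(index);                                    // 4
console.log(Math.floor((startIndex + stopIndex) / 2)); // 4
```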

Now we need to write a loop that increments or decrements our indices while the key value is not equal to the current middle value and the starting index is less than the stopping index. At every step of the algorithm's execution, the current middle value is evaluated, which determines what the algorithm does next.

...
while(arr[index] !== key && startIndex < stopIndex) {

  if (key < arr[index]) {
    stopIndex = index - 1;
  } else if (key > arr[index]) {
    startIndex = index + 1;
  }

  index = (startIndex + stopIndex) >> 1;
}
...

The finished algorithm:

function binarySearch(arr, key) {
  var startIndex = 0;
  var stopIndex = arr.length - 1;
  var index = (startIndex + stopIndex) >> 1;

  while(arr[index] !== key && startIndex < stopIndex) {
    if (key < arr[index]) {
      stopIndex = index - 1;
    } else if (key > arr[index]) {
      startIndex = index + 1;
    }

    index = (startIndex + stopIndex) >> 1;
  }

  if (arr[index] == key) {
    return index;
  } else {
    return -1;
  }
}

Our final conditional statement checks to see if the value was found, and if so, returns the index of the key value. If the value is not found, the function returns -1.
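To sanity-check the finished function, here's a quick run against a small, known array (the sample values below are mine, not part of the original problem):

```javascript
function binarySearch(arr, key) {
  var startIndex = 0;
  var stopIndex = arr.length - 1;
  var index = (startIndex + stopIndex) >> 1;

  while (arr[index] !== key && startIndex < stopIndex) {
    if (key < arr[index]) {
      stopIndex = index - 1;
    } else if (key > arr[index]) {
      startIndex = index + 1;
    }

    index = (startIndex + stopIndex) >> 1;
  }

  if (arr[index] == key) {
    return index;
  } else {
    return -1;
  }
}

var sample = [1, 3, 5, 7, 9, 11];

console.log(binarySearch(sample, 7)); // 3
console.log(binarySearch(sample, 4)); // -1
```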

Now that we have our algorithm, let's return to our problem statement. We need to perform a binary search on an array containing 1000 random integers. Let's build a function that populates an empty array with 1000 random values and then sorts the result, since binary search only works on sorted input.

var myArr = [];

function populateArray() {
  // Generate a random integer between two values
  function generateRandomInteger(min, max) {
    return Math.floor(Math.random() * (max - min) + min);
  }

  // Populate the array
  for (var i = 0; i < 1000; i++) {
    myArr.push(generateRandomInteger(0, 100));
  }

  // Sort the array
  myArr.sort(function(a, b) {
    return a - b;
  });
}

By running the populateArray function and logging the result, we'll be left with something like this:

console.log(myArr); // [1, 2, 3, ...];

So now that we have our array, let's run our binarySearch function.

binarySearch(myArr, 5); // => 3
binarySearch(myArr, 10); // => 11

Notice that if our array contains duplicate integers, the index returned by the binarySearch function will point at one occurrence of that integer, not necessarily the first or last. Now we've got one final step to answering our question. We need to run the binary search over our entire range and log all of the numbers that don't appear in the array. This can be done pretty simply by adding a log inside of the return conditional in the binarySearch function and writing a test function that calls it for every value in our range.

...
if (arr[index] == key) {
    return index;
  } else {
    console.log(`${key} not found!`);
    return -1;
  }
...
function test() {
  for (var i = 0; i <= 100; i++) {
    binarySearch(myArr, i);
  }
}

Running the test function will output the following to the console:

0 not found!
2 not found!
6 not found!
7 not found!
...

That's it! The algorithm and test function satisfy all of the requirements of our prompt.

Conclusion

Binary search is an extremely efficient way to search an array, because the maximum number of comparisons is bounded by the conditions of our algorithm at O(log n). This is in contrast to a more traditional linear search method like indexOf, which has an efficiency of O(n). After averaging 100 test cases, the average execution time of our binary search function was a mere 0.0501 milliseconds in Chrome.

If you'd like to see my code, you can view it on GitHub.