Which Programming Languages Are Functional?

In part one of this post, I defined functional programming not from an academic perspective, or a marketing one, but in a way that will make sense to a jobbing programmer. More importantly, I hope, I defined what side-effects are in a way that makes it easy for a jobbing programmer to spot them before they spiral out of control.

Now, let's take a look at functional programming languages in the real world...

Pot Shots Across the Programming Landscape

Armed with the ability to spot side-effects, we can now look at a given function and spot hidden complexity. And armed with some real-world definitions of FP, we can now survey the programming world and fire some new insights in practically every direction...

Functional Programming is not...

It's not map and reduce

Even though you'll see these functions in every functional language, it's not what makes the language functional. It's just something that crops up nearly every time you try to take the side-effects out of processing sequences of things.

It's not lambda functions

Again, you'll probably see first-class functions in every FP language. But it's something that naturally emerges when you start building a language that avoids side-effects. It's an enabler, but not the root cause.

It's not types

Static type checking is a very useful tool, but it's not a prerequisite for FP. Lisp is the oldest functional programming language, and the oldest dynamic language.

Static types can be very useful though. Haskell uses its type system beautifully in the attack on side-effects. But they aren't the ingredient that makes or breaks a functional language.

Say it long, say it loud, functional programming is about side-effects.

(And side-causes, of course).

What Does This Mean For Languages?

JavaScript is not a Functional Programming Language

Functional languages help you eliminate side-effects where you can, and control them where you can't. JavaScript doesn't meet these criteria. In fact, it's easy to spot places where JavaScript actively encourages side-effects.

The easiest target is this - the hidden input lurking in every function. What's particularly magical about this is how freely its meaning changes. Even expert JavaScript programmers have trouble keeping track of what this currently refers to. From a functional point of view, the fact that it's magically available at all is a design smell.

While you can certainly load FP libraries (Immutable.js, for instance) into JavaScript, and that makes programming in a functional style easier, it doesn't change the nature of the language itself.

(By the way, if you like the functional libraries that are gaining popularity in JavaScript land, imagine how much you'd like a whole language that supported the functional style.)

Java is not a Functional Programming Language

Java is most definitely not a functional language. The addition of lambdas in Java 1.8 does nothing to change that. Java stands in stark opposition to functional programming. Its core design principle says that code should be organised as a series of localised side-effects - methods that depend on and change the local state of an object.

In fact, Java is hostile to functional programming. If you write Java code that has no side-effects, that doesn't read or change the local object's state, you'll be called a Bad Programmer. That's not how Java is written. Your side-effect-free code will be peppered with static keywords, and your colleagues will frown and drive you out of town.
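To make that concrete, here's a minimal sketch (the Counter class and its names are hypothetical, my own example): the idiomatic version depends on the object's hidden state and changes it, while the side-effect-free version has to sneak in as a static method.

public class Counter {
  private int count = 0;

  // Idiomatic Java: the method depends on hidden local state (this.count) and changes it.
  public void increment() {
    this.count++;
  }

  // The side-effect-free alternative: everything in through the arguments,
  // everything out through the return value. Note the static keyword creeping in.
  public static int incremented(int count) {
    return count + 1;
  }
}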

I'm not saying Java is wrong. (Well, okay, I am.) But the point is it takes a completely different view of side-effects. Java thinks that localised side-effects are the cornerstone of good code; FP thinks they're the devil.

You could look at that from a slightly different angle. You could see both Java and FP as responses to the problem of side-effects. Both models recognise side-effects as a problem, and respond differently. OO's answer is, "contain them within boundaries called 'objects'," whereas FP's answer is, "eliminate them". Unfortunately, in practice Java doesn't just try to encapsulate side-effects; it mandates them. If you're not creating side-effects, in the form of stateful objects, then you're a Bad Java Programmer. People actually get fired for writing static too often.

Scala Has a Big Task on its Hands

Seen in this light, Scala is a very challenging proposition. If its goal is to unify the two worlds of OO and FP, then through the lens of side-effects we see it as trying to bridge the gap between "Side-Effects Mandatory" and "Side-Effects Forbidden1". They're such opposite views that I'm not sure they can be reconciled. You certainly can't unify the two just by making objects support the map function. You'll need to go deeper and reconcile the conflict between the two opposing stances on side-effects.

I'll leave you to judge if Scala succeeds in such a reconciliation. But if I were in charge of Scala's marketing I'd sell it as a gradual move away from the side-effecting world of Java, to the pure world of FP. Instead of unifying them, it could be a bridge away. Indeed, many people see it that way in practice.

Clojure

Clojure takes an interesting stance on side-effects. Its creator, Rich Hickey, has said that Clojure is about "80% functional". I think I can clarify why that is. From the outset, Clojure was designed to deal with one specific kind of side-effect: Time.

To illustrate this, here's a Java joke for you:

  • What's 5 plus 2?
  • 7
  • Correct. What's 5 plus 3?
  • 8
  • Nope. It's 10, because we turned 5 into 7, remember?

Okay, it's not a great joke. But the point is, in Javaland, values don't stay still. We might legitimately take something that represents a five, call a function, and find that it's no longer a five. Mathematics says that five never changes - we can call a function that gives us a new value, but we can never affect the nature of five itself. Java says values change all the time, and as long as they're wrapped in object boundaries that's okay.

The integer case may seem trivial, but the effect is amplified when we look at larger values. Remember InboxQueue from part 1? The state of InboxQueue is a value that changes over time. We can say that Time is a side-cause to the meaning of InboxQueue.
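As a hypothetical Java sketch of the same problem, consider a queue standing in for InboxQueue: the name inboxQueue refers to one thing, but what that thing is depends entirely on when you look.

import java.util.ArrayDeque;
import java.util.Queue;

public class TimeDemo {
  public static void main(String[] args) {
    // inboxQueue stands in for the InboxQueue from part 1 (hypothetical data).
    Queue<String> inboxQueue = new ArrayDeque<>();
    inboxQueue.add("hello");

    int before = inboxQueue.size(); // 1
    inboxQueue.poll();              // time passes, and the "value" changes underneath us
    int after = inboxQueue.size();  // 0

    // Same variable, same question, two different answers: Time is the side-cause.
    System.out.println(before + " vs " + after);
  }
}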

Clojure focuses ferociously on the side-cause of Time. Rich Hickey's insight2 is that the hidden effect of time means we can't rely on values to stay put; and if we can't rely on that, we can't rely on our functions' inputs, and so we can't rely on anything to behave predictably or repeatably. If even values have side-effects, then everything has side-effects. If values aren't pure, nothing in our programs can be pure.

So Clojure takes a sword to time. All its values are immutable (unchanging over time) by default. If you need a changing value, Clojure provides wrappers around unchanging values, and those wrappers are subject to heavy constraints (there's a rough sketch in Java after this list):

  • You must opt-in to changing (mutable) values by using a wrapper.
  • You cannot accidentally create a mutable value. You must always use the guards in the language to explicitly flag potential side-effects.
  • You cannot unwittingly consume a mutable value. You must always use the guards in the language to explicitly acknowledge the risk of side-effects.
  • When you open a mutable-value wrapper, the thing you get back is immutable again. You can easily get back out of the time-dependent world and back into the pure one.
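Clojure code would show these constraints best, but to keep all the examples in one language, here's a rough Java analogue of the same discipline (the class and data are hypothetical): an immutable List plays the value, and an AtomicReference plays the opt-in wrapper. The crucial difference is that Clojure enforces this pattern by default, whereas in Java it's merely a convention you can choose to follow.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class WrapperDemo {
  public static void main(String[] args) {
    // The default: an immutable value, which cannot change over time.
    List<String> names = List.of("Alice", "Bob");

    // Changing things means opting in to an explicit wrapper.
    AtomicReference<List<String>> wrapper = new AtomicReference<>(names);

    // The change goes through the wrapper; the values themselves stay immutable.
    wrapper.updateAndGet(current -> {
      List<String> updated = new ArrayList<>(current);
      updated.add("Carol");
      return List.copyOf(updated); // put an immutable value back in
    });

    // Opening the wrapper hands back an immutable value again.
    List<String> snapshot = wrapper.get();
    System.out.println(snapshot); // [Alice, Bob, Carol]
  }
}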

With respect to time, Clojure is a great example of a functional programming language. The language is deeply hostile to the side-effect of time. It eliminates it wherever it can, by default, and where you feel you must have the side-effect, it helps you tightly control it so it doesn't spill into the rest of your program.

Haskell

If Clojure is hostile to time, Haskell is just plain hostile. Haskell really hates side-effects, and puts a large amount of effort into controlling them.

One of the interesting ways Haskell fights side-effects is with types. It pushes all side-effects up into the type-system. For example, imagine you've got a getPerson function. In Haskell it might look like:

getPerson :: UUID -> Database Person

You can read that as, "takes a UUID and returns a Person in the context of a Database". This is interesting - you can look at the type signature of a Haskell function and know for certain which side-effects are involved. And which aren't. You can also make guarantees like, "this function won't access the filesystem, because it hasn't declared that kind of side-effect." Tight control3.

Equally important, you can look at a function like:

formatName :: Person -> String

...and know that this just takes a Person and returns a String. Nothing else, because if there were side-effects you'd see them locked into the type signature.

But perhaps most interesting of all, is this example:

formatName :: Person -> Database String

The signature tells us that this version of formatName involves database-related side-effects. What the hell? Why does formatName need the database? You mean I'm going to need to set up and mock out a database just to test a name-formatter? That's really weird.

Just by looking at this function signature, I can see something's wrong with the design. I don't need to look at the code, I can smell a rat just from the overview. That's magic.

Let's just briefly compare that with the Java signature:

public String formatName(Person person) {..}

Which Haskell version is that equivalent to? Without seeing the body of the function, you have no way of knowing. It may be the pure version, or it may access the database. Or it may delete the filesystem and return, "screw you boss!". The type signature tells you very little about what's going on, or what the surface area of the function is.
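To make that concrete, here's a hypothetical sketch: two classes exposing the very same formatName signature, with very different surface areas hiding behind it.

class Person {
  final String firstName;
  final String lastName;
  final String id;

  Person(String firstName, String lastName, String id) {
    this.firstName = firstName;
    this.lastName = lastName;
    this.id = id;
  }
}

interface Database {
  String preferredNameFor(String id);
}

class PureFormatter {
  // Pure: the result depends only on the argument.
  public String formatName(Person person) {
    return person.firstName + " " + person.lastName;
  }
}

class SneakyFormatter {
  private final Database database;

  SneakyFormatter(Database database) {
    this.database = database;
  }

  // Exactly the same signature as the pure version, but this one quietly hits the database.
  public String formatName(Person person) {
    return database.preferredNameFor(person.id);
  }
}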

Haskell's type signatures, in contrast, tell you a great deal about the design. And because they're checked by the compiler, they tell you things you know to be true. And that means they make great architectural tools. They surface design smells at a very high level, and they surface patterns of coding too. I'm going to keep the words 'functor' and 'monad' out of this post, but I will say that high-level software patterns begin with high-level analysis, and high-level analysis is made much easier when you have a high-level notation4.

Perl

Perl deserves a mention in any discussion of side-effects. It has a magic global variable, $_, which means something like, "the value we're currently talking about"5. It gets used and/or changed by many of the core library functions, implicitly. As far as I know this gives Perl the distinction of being the only language where one global side-effect is considered a core feature.

Python

Let's take a quick look at a fundamental side-effecting pattern in Java:

public String getName() {
  return this.name;
}

How would we purify this call? Well, this is the hidden input, so all we have to do is lift it up to an argument:

public String getName(Person this) {
  return this.name;
}
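Strictly speaking, that last snippet is only an illustration - the Java compiler won't accept this as an ordinary parameter name. A compilable sketch of the same move, using a hypothetical Person class, makes the hidden input an explicit argument via a static method:

public class Person {
  private final String name;

  public Person(String name) {
    this.name = name;
  }

  // The purified accessor: the hidden input (this) becomes an explicit argument.
  public static String getName(Person person) {
    return person.name;
  }
}

Callers now write Person.getName(somePerson) instead of somePerson.getName(), and everything the function depends on is visible in its signature.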

Now getName is a pure function. It's noteworthy that Python adopts this second pattern by default. In Python, every object method takes the object as its explicit first argument - except that, by convention, it's called self rather than this:

def getName(self):
    return self.name

Explicit is better than implicit indeed.

Mocking

Mocking frameworks usually do two things.

The first is they help you set up value objects to act as inputs. The harder your language makes it to set up complex values, the more useful you'll find this. But that's an aside.

The second is more interesting in this discussion - they help you set up the right side-causes to the function under test, and track that the right side-effects have occurred after the test.

Seen through the lens of side-effects, mocks are a flag that your code is impure, and in the functional programmer's eye, proof that something is wrong. Instead of downloading a library to help us check the iceberg is intact, we should be sailing around it.

A hardcore TDD/Java guy once asked me how you do mocking in Clojure. The answer is, we usually don't. We usually see it as a sign we need to refactor our code.
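As a hypothetical Java sketch of that kind of refactoring: instead of mocking out a clock to test an age calculation, lift the hidden input into an argument and the mock simply disappears.

import java.time.LocalDate;
import java.time.Period;

public class AgeCalculator {
  // Needs a mock: the current date is a hidden side-cause.
  public int ageNow(LocalDate birthDate) {
    return Period.between(birthDate, LocalDate.now()).getYears();
  }

  // Needs no mock: every input arrives as an argument, so tests just pass values in.
  public static int ageOn(LocalDate birthDate, LocalDate today) {
    return Period.between(birthDate, today).getYears();
  }
}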

Design Smells (or The Scent Of Nothingness)

If there were an I-Spy book of side-effects, the two easiest targets to spot would be functions that take no arguments, and functions that return no value.

No Arguments Signal Side-Causes

Whenever you see a function with no arguments, one of two things is true: either it always returns exactly the same value, or it's getting its inputs from elsewhere (i.e. it has side-causes).

For example, this function must always, always return the same integer (or it has side-causes):

public int foo() { ... }
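For example (the bodies here are hypothetical), the only two kinds of thing that can live behind that signature are a constant and a side-cause:

public class NoArgs {
  private int counter = 0;

  // Possibility one: it always returns exactly the same value.
  public int constantFoo() {
    return 42;
  }

  // Possibility two: its input arrives from somewhere other than the argument list -
  // a side-cause, in this case hidden object state.
  public int sneakyFoo() {
    return counter++;
  }
}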

No Return Value Signals Side-Effects

And whenever you see a function with no return value, then either it has side-effects, or there was no point calling it:

public void foo(...) {...}

According to that function signature, there is absolutely no reason to call this function. It doesn't give you anything. The only reason to call it is for the magical side-effects it promises it will silently cause.
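To make it concrete with a hypothetical body: if a void method is worth calling at all, it must be reaching out and changing something.

import java.util.List;

public class NoReturn {
  // The signature promises the caller nothing back. The only reason to call it
  // is the side-effect: it mutates the list it was handed.
  public void foo(List<String> names) {
    names.add("side-effect!");
  }
}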

Summary / Conclusion-y Thing

A real, intuitive awareness of side-effects will change the way you look at coding. It will change everything from how you look at individual functions, right up to overall systems-architecture. It will change the way you look at programming languages, tools and techniques. It changes everything. Go kill a side-effect today...

Footnotes


  1. Yes, I'm conflating OO and Java. In the context of Scala I think it's fair to equate the two.
  2. One of them!
  3. PureScript takes this idea further, and is worth looking into.
  4. I've had some great Clojure design discussions where we've used Haskell signatures to explain ourselves, verify the consistency of the design, and to summarise our conclusions. Yes, Clojure discussions. Haskell's notation has a value that extends far beyond the language.
  5. See man perlvar for the exact definition.