In today's world, agility and speed of application delivery are all that matter. But I often find myself stuck in the trap of networking. How will my microservices communicate? That's the burning question. The worst part? We go through the same pain every time we start a project.

But microservices were supposed to make us more agile, right? Instead, it feels like they're making our lives a bit harder.

Before we get into how a function mesh makes writing microservices easier, I'd like to talk about what problems it's trying to address. If you're frustrated enough with the current practices around microservices, feel free to skip the next bit and dive straight into the solution.

What’s wrong with Microservices?

Nothing actually.

I absolutely love microservices. But due to their nature, certain things become really difficult. Here are a few that particularly stand out.


Abstraction

The entire point of microservices is to abstract away a particular resource.

For example, a billing service must do just that: billing. It has complete control over its portion of the database. But this also means no other service should implement or share the same functionality. The billing service needs to be as generic as possible to accommodate all access patterns without leaking logic into other microservices.

This gets really difficult to reason about, especially if the microservice dependency tree is deep.


Communication

How will these microservices communicate? If you are planning to stick with our good old friend HTTP, how will they discover each other?

These are questions you need to address before any development starts. The worst part? They are not even part of our business logic.

Getting microservice communication patterns right is tricky. If not done correctly, it can severely restrict, or sometimes even break, our application.

Load Balancing

Few people seem to talk about this. It kind of falls under the banner of communication.

Sure, you might have 100 microservices running at any given point in time. But you probably also have 3 to 5 instances of each microservice running in parallel. How do you make sure they carry more or less the same load?

Sure, you can integrate with service discovery systems like Consul. But that's just one more piece you need to explicitly integrate with.
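To make the load balancing problem concrete, here is a minimal sketch of the kind of plumbing each service would otherwise end up writing itself: a naive client-side round-robin over replica addresses. The addresses are made up for illustration; this is not a Space Cloud API.

```javascript
// Naive client-side round-robin over replica addresses -- the kind of
// networking plumbing a function mesh saves every service from rewriting.
function makeRoundRobin(replicas) {
  let next = 0;
  return () => {
    const target = replicas[next];
    next = (next + 1) % replicas.length; // cycle through replicas evenly
    return target;
  };
}

// Hypothetical replica addresses for one microservice
const pick = makeRoundRobin(['10.0.0.1:8080', '10.0.0.2:8080', '10.0.0.3:8080']);

console.log(pick()); // 10.0.0.1:8080
console.log(pick()); // 10.0.0.2:8080
console.log(pick()); // 10.0.0.3:8080
console.log(pick()); // wraps back to 10.0.0.1:8080
```

And this sketch doesn't even handle replicas joining, leaving, or failing, which is exactly why dedicated systems like Consul exist.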

Monitoring and Access Control

So you’ve pushed a few new updates. All your tests were passing. But suddenly your requests are failing. Failures are cascading. Everything is on fire and you have no idea why.

I feel for you if something like this has happened before. If it hasn't, brace yourself, because it can.

You need an efficient mechanism that helps you monitor exactly which API call fails.

A service mesh does help a lot here. But it feels like yet another piece you need to learn and maintain.

In Comes the Function Mesh

A function mesh is very similar to a service mesh. It provides access control over who can access what, helps you discover other microservices, and has built-in monitoring to make operations a whole lot easier.

Let me also add that function mesh is a concept we are introducing with Space Cloud.

So What’s the Difference?

Instead of exposing functionality as HTTP endpoints, you expose it as functions.

Why functions?

Functions are among the most composable programming constructs. Okay, they might not be at the very top of that list, but they are super flexible to use. Most importantly, functions are the first thing we learn when we start coding. Reasoning about functionality as functions makes things a whole lot easier.
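To make the contrast concrete, here is a sketch of the same "add two numbers" operation expressed both ways. The service URL and route are hypothetical, purely for illustration:

```javascript
// 1. As an HTTP endpoint: the caller must know the address, the route,
//    the serialization format, and deal with transport concerns itself.
async function addOverHttp(num1, num2) {
  const res = await fetch('http://arithmetic-service/add', { // hypothetical address
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ num1, num2 }),
  });
  return (await res.json()).result;
}

// 2. As a plain function: nothing but inputs and an output to reason about.
const add = (num1, num2) => num1 + num2;
```

The function mesh lets you write and call functionality in the second style while it handles everything the first style forces on you.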


Let me show you a code snippet of a Space Function to add two numbers:

// Initialize api with the project name and url of space cloud
const { API } = require('space-api');
const api = new API('demo-app', 'http://localhost:4122');

// Make a service
const service = api.Service('arithmetic');

// Create a function to add two numbers
const add = (params, auth, cb) => {
  // Add the numbers received from the client
  const sum = params.num1 + params.num2;

  // Send back a response
  cb('response', { result: sum });
};

// Register function with the service
service.registerFunc('add', add);

// Start the service
service.start();
In this snippet, we first initialise the api with the project name and the Space Cloud URL. Then we create a service named arithmetic. A service is nothing but a collection of functions.

Inside the service, we register a function named add, which returns the sum of the two numbers, and then start the microservice.

So how do we invoke this function?

// Initialize api with the project name and url of space cloud
import { API } from "space-api";

const api = new API("demo-app", "http://localhost:4122");

// Call a function 'add' on 'arithmetic' running on backend
const res = await api.call('arithmetic', 'add', { num1: 10, num2: 5 })

console.log(res) // Outputs -> { result: 15 }

Everything pretty much remains the same, with one minor difference: we call a method on the api object to invoke our function. This is pretty close to invoking a function in the same program.

Excited about it already? Check out our quick start guide to get started right away.

We could have multiple instances of Space Cloud running in parallel, and hundreds of microservices running on different nodes. Each microservice could have multiple replicas as well. But at no point do we need to worry about networking.

As long as we know the address of any one Space Cloud instance, we will be able to invoke any function on any service. Space Cloud takes care of load balancing and service discovery.

And since all communication flows through Space Cloud, it becomes super easy to control access and monitor the runtime metrics of each function invocation.

So isn’t Space Cloud the Bottleneck?

Fortunately, no!

In such a setting Space Cloud serves as a very thin layer providing authentication and authorisation.

Most of the networking jazz is delegated to a NATS cluster running under the hood.

But isn’t this similar to AWS Lambda?

In terms of API and ease of use? Absolutely yes.

In terms of architecture? Not really.

AWS Lambda is a function-as-a-service offering. The function is spawned whenever a request is made. Nothing is long-lived, and it comes with several restrictions on things like the databases and libraries you can use with it.

A Space Function runs as a microservice. Let me rephrase: it is a microservice. The function is a callback that gets invoked whenever a request is received by Space Cloud.

In other words, your function has no timing restrictions. Wanna take hours to execute? No questions asked. You could even maintain a database connection pool to maximise performance, or train a TensorFlow model if you wish.
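To illustrate why the long-lived process matters, here is a plain Node.js sketch, using the same (params, auth, cb) callback shape as the add example but no Space Cloud APIs. A module-level "pool" (here just a counter standing in for real database connections) survives across invocations, which a spawn-per-request model like Lambda's does not guarantee:

```javascript
// Module-level state: lives as long as the process does, so every
// invocation of the function below reuses it instead of rebuilding it.
const pool = { connections: 0, acquired: 0 };

// Hypothetical "connection pool": lazily grows up to 3 connections,
// then hands out existing ones on every subsequent call.
function acquireConnection() {
  if (pool.connections < 3) pool.connections++;
  pool.acquired++;
}

// Same (params, auth, cb) shape as the `add` function shown earlier
const countedAdd = (params, auth, cb) => {
  acquireConnection(); // reuse the long-lived pool, don't reconnect
  cb('response', { result: params.num1 + params.num2, calls: pool.acquired });
};

// Two invocations in the same process share the same pool state
countedAdd({ num1: 1, num2: 2 }, null, (type, data) => console.log(data));
countedAdd({ num1: 3, num2: 4 }, null, (type, data) => console.log(data));
// pool.connections stays capped while the call count keeps accumulating
```

Because the process never restarts between requests, the expensive setup work happens once instead of on every invocation.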

Get whatever library you want. Use the same package manager you love. Dockerize it and ship it to Mars!

I’m sure you’d definitely want to take it for a spin by now!

You can call your functions from other microservices, or even from your frontend. Space Cloud has a robust security module to provide the access control I spoke about earlier.

Wrapping Up

Microservices are undoubtedly a well suited approach to tackle agility related problems at scale.

As I mentioned earlier, I absolutely love microservices. In fact, we have adopted a microservice-based architecture at Space Up Tech, and we’ve built it on top of Space Functions.

If you want to get your hands dirty, do check out our quick start guide. We’re super excited to know how you are planning to use it.

If you like what we are doing, do star us on GitHub.

We would love to have you onboard as well! You can join our Discord server to get in touch with us directly. Welcome to the new way of writing microservices!