Waiting for The Next Big Thing

I spent the afternoon in a bookstore. There were no books in it. None had been printed for nearly half a century. And how I have looked forward to them, after the micro films that made up the library of the Prometheus! No such luck. No longer was it possible to browse among shelves, to weigh volumes in hand, to feel their heft, the promise of ponderous reading. The bookstore resembled, instead, an electronic laboratory. The books were crystals with recorded contents. They could be read with the aid of an opton, which was similar to a book but had only one page between the covers. At a touch, successive pages of the text appeared on it. – Stanisław Lem, “Return from the Stars”

This is how a sci-fi writer in 1961 imagined the book reading experience of the future – with surprising accuracy. What do you think software development will look like in the future? Can we be as accurate? Let’s consider what The Next Big Thing in programming may look like.

The History

I like to think about developers’ productivity as the number of things they don’t need to worry about. The big steps in programming actually consisted of adding layers of abstraction that let developers forget what sat below. Think about the productivity boost between assembly language and C, then C++: it allowed us to rely on an abstraction of the processor. Then .NET, Java, Python and Ruby added another abstraction layer that does a pretty good job of managing memory and lets us forget about the machine, or even the OS layer below. Even if we leave binary machine code out of the picture, the productivity boost is huge.

Every time I can forget about managing a layer and focus on the task at hand, my productivity increases.

You can observe more layers of abstraction being added elsewhere. For 3D programming, OpenGL and DirectX let us nearly forget about the underlying hardware. With SQL, I can describe the results I need with only a limited understanding of how they are pulled off the disk.

The common ground of all the examples above is that as a developer, I can stick to a single layer of abstraction.

Working on the same layer of abstraction allows me to focus on my goal and leave the development of the underlying layer to someone else (who does it better than me). The language or driver developer does that work, but they work within their single layer of abstraction as well.

The Now

From the perspective of the examples above, the jump from assembly language to C++ is pretty huge. So is the transition from C++ to Ruby. However, looking from a wider perspective, we can clearly see that Java, .NET, Ruby, Python, Groovy and others allow us to express the same things using different words. None of them generates a productivity boost comparable to moving from C to Python. Even Go, Scala and Kotlin are just adding more colors to the same picture. They’re still using very similar constructions to tell the story.

Still, you may observe some evolution in different areas. For example, look at how GUI frameworks have evolved in the last few years. You’ll notice a common shift from the imperative programming model (like Swing) to a declarative one: ASP.NET, Apache Flex, JSF, JavaFX, Angular, Polymer, GWT UI Binder and the Vaadin declarative syntax are all good examples of the imperative-to-declarative transformation. It looks like an XML-based language (or JSON for ExtJS) works pretty well and, in the long run, is even more effective than WYSIWYG UI builders.

Adding a layer of abstraction allows us to forget about “how” to build the UI and focus on the “what”, in a declarative way.
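As a rough sketch of that shift (the login form and its widgets are made-up examples), imperative Swing code spells out every construction step, while a declarative template only states what should appear:

    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.JPanel;

    public class ImperativeForm {
        public static void main(String[] args) {
            // Imperative: describe *how* to assemble the UI, step by step.
            JFrame frame = new JFrame("Login");
            JPanel panel = new JPanel();
            panel.add(new JLabel("User name"));
            panel.add(new JButton("Sign in"));
            frame.add(panel);
            frame.pack();
            frame.setVisible(true);

            // Declarative (a JSF/UI Binder-style template): state *what* should
            // appear and let the framework figure out how to build it:
            //   <panel>
            //     <label value="User name"/>
            //     <button label="Sign in"/>
            //   </panel>
        }
    }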

On the other hand, you’ll notice that the same patterns are now being reinvented without bringing any new layers of abstraction. Just compare sample code of JSF PrimeFaces and PrimeNG: the differences are only syntactic. Angular is neither less nor more complicated than JSF, and it even suffers from the same pattern-related problems from time to time (see the 2010 c:if problem vs the 2016 ng-if issue). The patterns for communicating between components on the UI were already well designed when the GUIs for IntelliJ IDEA or MS Office were implemented, and those patterns should still work in other technologies.

On the server side, the most recent tool to improve productivity is probably the dependency injection container. It allows me to almost forget about how the components are wired together and focus on business functionality. This is just one more thing to stop worrying about, but still an improvement. Using a declarative language on the server side didn’t turn out to be a boost. Does anyone remember Adobe ColdFusion?
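A minimal sketch of the DI point with Spring-style annotations (OrderService and PaymentGateway are made-up names): I declare what my component needs, and the container worries about how it gets wired in:

    import org.springframework.stereotype.Service;

    @Service
    class PaymentGateway {
        void charge(String orderId) { /* talk to the payment provider */ }
    }

    @Service
    class OrderService {
        private final PaymentGateway payments;

        // Constructor injection: the container supplies the dependency;
        // I never write "new PaymentGateway()" or any wiring code myself.
        OrderService(PaymentGateway payments) {
            this.payments = payments;
        }

        void placeOrder(String orderId) {
            payments.charge(orderId);
        }
    }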

Does an attempt to add an abstraction layer by running an Enterprise Application Server actually do the trick? In theory, the goal is to let you forget about the framework, tools, libraries and runtime, and just put your .ear file on a web server cluster. Has it ever worked like that for you? In most cases, I found it forced me to learn the underlying architecture, toolset and runtime really well in order to develop effectively for a JEE server. Do I need to understand ASM to develop in Java? JMS, JDBC, JNDI, JNA, JTA, JAX-WS and the rest, mixed into the deployment tool, actually add constraints and do not make things easier. Why is that?

A JEE server mixes the deployment and framework layers of abstraction and won’t let you forget about either of them.

For me, Spring Boot did a better job of leaving deployment to a deployment tool (meaning: forget about it) and building the application as a single .jar file. If I need to worry about the framework anyway, let’s not add another middleware layer. Spring Boot does a good job of wiring things together using the DI pattern.
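This is roughly all the deployment-related code a Spring Boot application needs; the embedded server and the single executable .jar take care of the rest:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    public class Application {
        public static void main(String[] args) {
            // Packaged as one executable .jar with an embedded server:
            // "java -jar app.jar" is the whole deployment story.
            SpringApplication.run(Application.class, args);
        }
    }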

The Future

So how would you imagine the Next Big Thing, then? Would it be another language, a framework or just a tool?

Here is what a server-side Next Big Thing would look like to me:

  • Serverless. The Next Big Thing lets me execute my code on a virtual runtime that abstracts away the machines, servers, containers and cloud, so I can build general-purpose web applications. I don’t even care how many servers are being used, just as I don’t think much about individual CPU cores right now.
  • Simple. As a young developer, I can learn the Next Big Thing and start developing a new application just like I’d start a new Ruby on Rails or Spring Boot project. I don’t need to learn Java, Go or whatever is under the hood. I just learn the Next Big Thing.
  • Abstract. I can develop in the Next Big Thing without a deep understanding of any underlying tools related to server communication, deployment, containerization and so on. The runtime is “invisible” to me and my code is “just executed” in a distributed environment that scales automatically. I never need to leave this layer of abstraction.
  • Scalable. When working in the Next Big Thing, I don’t need to understand horizontal and vertical scaling. The virtual runtime just “grows” where needed.
  • Distributed. Whether two units of code are executed on the same machine or on different ones is transparent to me. It’s all just running in the cloud’s virtual machine. Service boundaries are defined just like interfaces in OOP (see the sketch after this list).
  • Testable. I can develop and integration-test modules of my application written in the Next Big Thing without setting up any complex local environment. It should be as simple as installing the JDK.
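A minimal sketch of that “interfaces as service boundaries” idea, with hypothetical names: the boundary is just an interface, and whether its implementation runs in the same process or on another node would be the runtime’s concern, not mine:

    // Hypothetical service boundary: callers depend only on this contract.
    public interface InvoiceService {
        Invoice createInvoice(String customerId, long amountCents);
    }

    // A plain value object crossing the boundary; the runtime could serialize
    // it and ship it to another machine without the caller ever noticing.
    record Invoice(String id, String customerId, long amountCents) {}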

Does this wish list look complex? The architecture of a modern multi-core CPU, or using a GPU for computation, is complex as well. Memory management with a GC is complex. However, there are already runtimes that successfully hide this complexity behind a nice, easy language.

Just like the evolution of different GUI frameworks towards a common direction mentioned above, you can observe the evolution of server-side frameworks. Take a look at Vert.x, Akka, Apache Spark, Apache Storm. These distributed computation engines usually focus on data processing and real-time computation, but they look like the first pieces of the puzzle of the Next Big Thing. Lagom seems like a step forward by combining Akka and Play into something more general-purpose and microservice-oriented, but it still does not hide the underlying layers of abstraction.

On the other hand, consider the eruption of cloud solutions such as Docker, Kubernetes and CloudFoundry – they may be considered the initial building blocks of the Next Big Thing. The speed at which cloud computing evolves is outstanding. Recently, the FaaS concept has allowed us to break with the concept of a “server”. Check out AWS Lambda, Azure Functions, IBM OpenWhisk or Google Cloud Functions, which let you deploy pieces of code “without” a server, and AWS Step Functions, which lets you build workflows this way. Currently, the complexity of such an approach to building complex business apps might be overwhelming, but serverless computing is already predicted to be the future of the enterprise. The rise of machine learning services (again from Google, Amazon and Microsoft) will only boost the evolution of distributed computation engines.
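For a flavour of how small that unit of deployment already is, here is a minimal AWS Lambda handler in Java (the greeting logic is of course a toy example); note that nothing in it names a machine, a container or a port:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    // Deployed as a function, not a server: the platform decides where it runs,
    // how many copies exist and when they are created or destroyed.
    public class GreetingHandler implements RequestHandler<String, String> {
        @Override
        public String handleRequest(String name, Context context) {
            return "Hello, " + name;
        }
    }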

I imagine my code distributed over a “swarm” of FaaS instances that can grow and shrink seamlessly (without me knowing it). The code execution paths look like regular code (imperative, reactive or something new) but cross machine boundaries from time to time. At the same time, I don’t need to think about shared state, because the Next Big Thing handles it for me just like the GC does its work in the JVM. I just press the “run on cloud” button.

I don’t know if the Next Big Thing will be a new language or one of the existing ones. I’m not sure if it will be more Scala-like or closer to Clojure. It might be reactive or actor-based, yet hide that fact entirely under the hood, so that I can focus on business features using simple verbs. Whatever it turns out to be: looking at the toolsets being created and the speed at which new technologies are emerging, I can’t wait to write my TODO list application in The Next Big Thing.
