Software development is a never-ending evolution. Although many fundamental computer science algorithms were worked out before modern computers existed, their software applications have changed drastically. Software is little more than glue for a collection of algorithms, procedures, and functions. The implementation of that new idea you have will likely consist of 99% old code and 1% new code. Software developers simply piece it all together.

If software is just glue to stitch together existing components, why not let robots do most of the grunt work for us? Why not let a single human spend a few minutes declaring what they want, and let the computer figure out the rest? The Dropsource platform can crank out thousands of lines of boring, tedious code in a matter of seconds, a pace that far outperforms any person I’ve ever met.

Right now, Dropsource handles the development of software while people take care of the design. But what if the computer could do both? What if the computer could design, develop, and run the software — all on its own? What if the computer could automatically adapt to what the person needs?

Why do we use computers?

“Computer” is a loose term here – it can refer to the smartphone in your pocket, the laptop on your coffee table, or the unnecessarily expensive smart-fridge that you decided to buy at 1am on a Thursday. These computers, like all technology, were built to make our lives easier. Technology is getting “smarter” every day, and we mere mortals are struggling to keep up.

Massive technology companies are finding new ways of understanding their customers every day, and they probably know more about you than you know about you. We occasionally see this data applied in ways that actually benefit the consumer, though, such as traffic patterns in Google Maps. Again, technology is meant to make our lives easier. The advent of the calculator eased arithmetic, the telephone allows us to communicate with people across the world instantly, and the microwave allows us to nuke our disgusting frozen pizza rolls in just a few minutes.

Technology inspires us to create better technology, but as technology becomes more intelligent there may come a time when machines understand us better than we do. And at that point, could we simply let machines take the reins in developing new machines?

But the machines would destroy humanity!

Dystopian novels and movies abound. There’s no denying the risks involved, but let’s imagine a utopia for a change.

The machines understand humanity.

Like it or not, Google knows your search patterns. Google knows what you read, what you listen to, what videos you watch on YouTube, and a whole lot more. Amazon knows the products you buy and the books you read and anticipates when you’ll need more dog food or toothpaste. Facebook and Twitter understand your day-to-day lives, ranging everywhere from emotions to actions to friends to the food you like (and inevitably photograph and post on Instagram). These companies know you down to the individual level. They’ve got data on billions of people on Earth, so they know us at a macroscopic level too. They understand human patterns and can predict, with a pretty high degree of accuracy, what you’ll do next or how you’ll behave. And if they can’t do it yet, they will.

The data exists. The machines are getting smarter. And the train isn’t stopping anytime soon. But again, let’s try to have happy, utopian thoughts here. How could we apply this endless stream of data to actually benefit us?

How do we use computers?

Here, we delineate between the computers used by people and the computers used by other computers (servers). Though there’s room to discuss the latter, the former is the focus here. We use our smartphones and laptops on a daily basis. We check our email, we tweet our tweets, we check the weather, we listen to music, and we write blog posts. It’s a never-ending list of use cases, in a never-ending list of applications.

Odds are, you use only a small portion of that ginormous Facebook app on your phone. You only really use a subset of each application’s functionality; the rest is just code that never gets executed. And that small subset is highly individual; another person may use completely different parts of the application, and thus a very different subset.

Furthermore, your usage of that application may change over time. Perhaps you upload more photos to Facebook than you used to, or maybe you don’t post much at all nowadays. As people and habits change, so does an application’s utility. Right now, our solution to that is to pack as much stuff into one app as we can, to cover as much ground as possible. As a result, we have bloated software, clunky interfaces, and inefficient code. Functionality is great, until it isn’t.

“Software 2.0”

With the core functionality extracted from each of the applications we use, we can combine it all back into one single application. We would define this interface to our liking: perhaps an email widget in the top left, or a Twitter feed in the top right. We only show the parts we actually need, e.g. a simple list of the last five emails and a “compose tweet” form. Since this is just the functionality that I need, and it’s designed with my preferences in mind, it’s all customized to my tastes. The entire computing experience is of my choosing!
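To make the idea concrete, here is a minimal sketch of what such a user-defined layout could look like in code. Every name here (the `Widget` class, the service and feature labels) is hypothetical illustration, not any real Dropsource API:

```python
# A sketch of a "Software 2.0" layout: each widget declares which external
# service it wraps, which single piece of functionality it exposes, and
# where the user placed it on screen.

from dataclasses import dataclass


@dataclass
class Widget:
    service: str   # external service the widget wraps, e.g. "email"
    feature: str   # the one subset of functionality the user wants
    position: str  # where the user dropped it, e.g. "top-left"


def render(layout):
    """Return a plain-text summary of the composed interface."""
    return [f"{w.position}: {w.service}/{w.feature}" for w in layout]


# The user's personalized interface: only the parts they actually use.
layout = [
    Widget(service="email",   feature="last-5-messages", position="top-left"),
    Widget(service="twitter", feature="compose-tweet",   position="top-right"),
]

print(render(layout))
```

The point of the sketch is the declarative shape: the user states *what* they want and *where*, and the platform does the stitching.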

This is, or could be, “Software 2.0.” The unification of various applications into one seamless, user-specific interface, bringing the exact functionality that the user wants/needs without any of the unnecessary bloat that currently comes with it. It’s a single application, with native functionality for all of the external services baked right into it.

This could shake up how we perceive operating systems and applications. The traditional idea of an operating system is just the basic core functionality of a computer (a kernel), upon which you install a bunch of applications to make it useful. What if we instead treat the operating system and its applications as one, unified system?

The user installs this platform, and proceeds to set it up exactly as the user wants. Instead of installing several distinct applications, the user can simply drag-and-drop the pieces of functionality they want from external services into their own customized interface. The user can then change their system at will, maintaining an unprecedented level of customization.

A truly personalized software experience.

“Software 3.0”

Over time, as we learn to apply machine learning in new ways, we’ll start to see machines with the ability to construct user interfaces on their own. Specifically, machines will eventually be able to design interfaces for individual users, tailored specifically to that user’s needs. I like the color blue, and I like to have multiple screens with multiple windows in front of me at once. A computer could pick up on that and design a custom interface accordingly. Other people may like different colors with different screen layouts, so the computer could take it from there. Or, taken further, a computer might recognize a user’s visual impairment and present a more audio-based interface.

Adapting an interface to screen sizes has been a big thing in web development for the last decade. I expect the next step is to adapt the interface to the user as well. Perhaps, taken a step further, we could adapt more than just user interfaces – we could adapt an entire application. As machines understand our individual behaviors better, they can predict which features and functionality we may need in order to accomplish our individual tasks. Maybe the system determines that you’re a big Game of Thrones fan, so it programs its interface and its functionality to draw fan information from various forums and social media and display it next to a video of this week’s episode.

Let’s take everything from “Software 2.0” and add “adaptability”. The system’s design, interface, and functionality would change over time as the user’s behavior and needs change. The system could stitch together components of various external services as needed, and present them in an interface that the user loves. This would be actual adaptive personal software. How cool!
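The adaptability loop can be sketched in a few lines: the system records which features the user actually invokes and periodically rebuilds the interface around the most-used ones. This is a toy illustration under my own assumptions, not a description of how any real system works:

```python
# A toy sketch of "Software 3.0" adaptability: track feature usage and
# rebuild the interface around what the user actually does.

from collections import Counter


class AdaptiveInterface:
    def __init__(self, top_n=3):
        self.usage = Counter()  # per-feature invocation counts
        self.top_n = top_n      # how many widgets fit on screen

    def record(self, feature):
        """Log that the user invoked a feature."""
        self.usage[feature] += 1

    def layout(self):
        """Rebuild the interface around the user's most-used features."""
        return [feature for feature, _ in self.usage.most_common(self.top_n)]


ui = AdaptiveInterface(top_n=2)
for feature in ["photos", "messages", "photos", "feed", "photos", "messages"]:
    ui.record(feature)

print(ui.layout())  # the most-used features float to the top
```

A real implementation would obviously need far richer signals than raw click counts, but the feedback loop (observe, rank, rebuild) is the core of the idea.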

How do we do it?

“Software 2.0” is on its way. Dropsource currently occupies a “Software 1.5” space. We “extract” functionality from other services through the use of SDKs and APIs. You can stitch together components from various services to develop your own, custom mobile application. You design and tailor the app exactly to your liking, and it interfaces with the other services you define. Dropsource provides a lot of control and customization, so it plays quite nicely into the notion of personalized software.
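The “extraction through SDKs and APIs” pattern looks something like the following sketch. The service clients and their methods here are entirely made up for illustration; a real integration would call each service’s actual SDK or HTTP API:

```python
# A rough sketch of stitching external services together behind small
# clients, then composing their responses into one personalized view.


class WeatherClient:
    def current(self, city):
        # Stand-in for a real HTTP call to a weather API.
        return {"city": city, "temp_f": 72}


class NewsClient:
    def headlines(self, limit):
        # Stand-in for a real HTTP call to a news API.
        return [f"headline-{i}" for i in range(1, limit + 1)]


def home_screen(city):
    """Compose one view from multiple independent service responses."""
    return {
        "weather": WeatherClient().current(city),
        "news": NewsClient().headlines(limit=2),
    }


print(home_screen("Raleigh"))
```

Each service stays independent behind its client; the app’s only job is composition, which is exactly the “glue” role described at the top of this post.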

Getting to “Software 2.0” would involve applying a product like Dropsource to an entire operating system (OS), as opposed to the development of individual applications which run on a mobile OS. In this form, the Dropsource editor would become integrated directly into the OS, providing all of the drag-and-drop functionality you’d expect, but on a much larger canvas. Once you’re satisfied with the changes, they would be instantly applied to your computer.

Reaching “Software 3.0” will prove to be a difficult task at best. Although those big tech companies out there collect an unfathomable quantity of data every second of every day, applying that data is the hard part. We’ve got really cool machine learning algorithms that can accomplish some unbelievable tasks, but despite all the media hype, we software engineers are not quite as advanced as we may think.

There’s still a long way to go before “artificial intelligence” lives up to what we see on TV. But once we’re far enough along, we could train machines to use Dropsource like a normal person would. It’s not going to happen overnight, but I bet we’ll start to see some bits and pieces of adaptive software in the next decade or two. And I can’t wait!