Reading the news, it’s easy to get the sense that we are all about to walk into a 1980s science fiction movie. Your phone tells your thermostat that you’re on the way home, so that it can adjust the temperature accordingly. Your car tells your home stereo where you were in the playlist, so that the stereo can turn on and take over. Your door recognizes you when you grab the knob and automatically unlocks. Lights come on as you walk into each room, etc., etc.

In truth, the Internet of Things might be better called the Internet of Everything. It’s easy to imagine that anything more complicated than a toothbrush is going to be connected . . . but then there was the (successful) Kickstarter campaign for a connected toothbrush, so the bar is apparently going to be even lower than that.

At the same time, the increasing pace of malicious attacks on various parts of existing computer infrastructure – be it by hackers or governments – makes one wonder just how much more integration and automation we can survive. The trend in wholesale credit card theft is heading in the wrong direction, as are episodes of corporate cyberattacks such as the one on Sony.

We’re well on our way to a riot of devices, functions, and interfaces, destined to be networked and used in combinations that defy enumeration, much less analysis. What kind of movie is it going to be, then: the machines serving us, or us enslaved by the machines – or the hackers behind them?

Consider the smartphone: it’s a single device, sold to you – and represented as functional and secure – by one vendor, but year after year attacks on phones are growing, not shrinking. Manufacturers (particularly of the OSes) seem to add new attack fronts as quickly as they discover (or have hackers demonstrate) the old ones. That does not bode well for the coming connected home, which will consist of dozens of devices selected from what will surely be thousands of vendors.

Security and standards are going to be key if it’s all going to work reliably and safely. It would seem impossible to standardize the connected toothbrush, thermostat, door, stereo, etc., and at the level of their physical operation it is, of course. But at a slightly higher level of abstraction, they all have the same needs: they have some combination of local inputs (whose hand is on the doorknob), outputs (whether to turn the furnace on or off), computations, and data that they need to communicate with the network (the toothbrush letting your dentist know it’s time for an appointment).

That level of inspection suggests an opportunity for standardization and then virtualization. We need to run code, and that code has local I/O and network access needs. According to the importance of the data being manipulated and communicated, we set the appropriate level of security. Standardizing an execution environment, virtualized above the actual hardware – whatever it may be – will deliver many benefits.

First, it reduces development cost, by reducing the number of development environments that have to be implemented and managed; along the way, it can reduce the ever critical time-to-market. Second, it reduces maintenance cost, because the software does not have to be modified as the underlying hardware evolves over time. Third, by reducing the plethora of operating environments, it reduces complexity, and thus opportunities for incompatibility and error.

Fourth, a multi-tenant virtualization solution can reduce product cost: if a smart device can rely on a dedicated home server to host its computing (or similar functionality in a router), then its own compute hardware requirements are reduced.

Finally, if that virtualized execution environment includes security functionality, it can greatly increase security: providing a single, secure execution and connectivity platform, rather than re-inventing the wheel across a number of vendors, centralizes something that really is best done as a fundamental service.

Centralizing the implementation in an industry consortium operating with an open source philosophy is a good way to get good security: hackers are going to put a lot of eyeball-hours into finding weaknesses, so the more eyes watching the development, the better. And for application functionality, open source work allows interested third parties to contribute their input and expertise as development moves into new areas.

That is the goal of the prpl Foundation. Supported by industry leaders in networking and broadband, software development, and embedded systems design, prpl has formed (and is forming) engineering groups to develop open, scalable platforms, with connectivity and security as first-principle design goals. Starting with – but not limited to – the MIPS architecture, prpl wants to provide the building blocks necessary to make sure that everyone ends up in the happy movie.