Taking years for acceptance
Why has it taken so long for the world to catch on to what some of us have known for over a decade? It is because in the majority of cases, new technologies that can disrupt business models take much more time to be accepted than anyone in that market wants or expects. In the U.S. market, it took cell phones about twenty years to get to major mass-market penetration. Cloud computing (think Software as a Service, timesharing) can trace its origins back many, many years.
Packaging the technology
In order to get across that chasm between the early adopters and the early majority, you need to package the technology into something that is easy for companies to consume. This means not only offering a complete solution, but offering it from brands that are trusted and companies that provide global scale.
Machines are becoming intelligent
That gave us the idea to call our book The Silent Intelligence, because we feel like machines and things around us are becoming more intelligent and are doing so silently.
In a nutshell, the biggest benefit of the Internet of Things is that it gives us a unique opportunity to talk to the analog world around us (machines, people, animals, plants, things) in a digital way, with all the benefits of digital communication — speed of light, easy multiplication of data, and easy integration with other digital systems. All this, combined with wireless communication, produces an effect of machine telepathy, a condition where things can communicate over large distances unconstrained by wires.
People didn’t see the need to have them in their homes until there were initial killer apps like spreadsheets and word processing. That helped to get the adoption started, but that’s not why people have PCs anymore. Phones went through the same cycle. The killer app that caused the adoption was voice communication.
De-wireization: More and more things are becoming wireless, which means they can be located anywhere. The growing ubiquity of cellular and Wi-Fi networks has driven this trend. The last wire to disappear will be the power cable, driven by advances in wireless power and power management.
In 1991, Mark Weiser, chief technologist at the Xerox Palo Alto Research Center (PARC), published an article on ubiquitous computing, which opens with these sentences: “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” The article laid the foundation for many subsequent visions, resulting in the development of RFID, smartphones, and M2M solutions.
All the stars are aligned for us in that the modules that wirelessly enable these things have come down in price. The networks are there to support them, and all the economics are coming into alignment so that it makes sense to have these devices connected.
“Once open connectivity interfaces are in place, service innovation will follow. There will be a shift from asking, ‘Do we need to connect it?’ to, ‘What can we do differently now that it’s connected?’”
The M2M technology ecosystem can be split into three major groups:
In the cellular world, where the bandwidth is scarce and relatively expensive in large volumes, it is crucial to use it efficiently. In most cases this implies making the device smarter so it can decide whether or not to send data back to the cloud.
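To make the idea concrete, here is a minimal sketch (not from the book; the threshold and readings are illustrative assumptions) of a device-side filter that transmits a reading only when it differs meaningfully from the last value sent, conserving scarce cellular bandwidth:

```python
# Illustrative sketch: a "smarter device" decides whether a reading is
# worth sending to the cloud, instead of streaming everything.

def make_reporter(threshold=0.5):
    """Return a function that decides whether a new reading should be sent."""
    last_sent = None

    def should_send(reading):
        nonlocal last_sent
        if last_sent is None or abs(reading - last_sent) >= threshold:
            last_sent = reading
            return True   # meaningful change: transmit to the cloud
        return False      # suppress: save bandwidth

    return should_send

should_send = make_reporter(threshold=0.5)
readings = [20.0, 20.1, 20.2, 21.0, 21.1, 19.9]
sent = [r for r in readings if should_send(r)]
# Only the readings that moved by at least 0.5 since the last
# transmission (20.0, 21.0, 19.9) are sent; the rest stay on the device.
```

The same dead-band pattern generalizes to time-based batching or compression; the point is that the decision logic lives on the device, not in the network.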
I think it really comes down to the ability to embed connectivity and get those things to market easier and faster, without taking a year. That’s the long pole in the tent, and it is on the device side.
Over the air upgrades
It’s normally not on the software side or on the connectivity side; it’s about how to get that device manufactured at scale and be able to update it over the Internet. That’s the biggest area for improvement, and it’s what will really get more products to market easier and faster.
One of the most critical parts of the electrical design is antenna design. The improvements in antenna design over the past twenty years have been truly amazing. Some of us remember the early cell phones with pullout antennas. Over time, antennae became hidden inside the actual device, and later antennae were integrated into the bodies of smartphones. In the meantime, devices got much smaller and more complex as more and more antennae had to be integrated within the same small footprint.
Support for various radio technologies
For example, an average smartphone today must have antennae supporting the following radio technologies: CDMA, GSM, UMTS, LTE, GPS, and Bluetooth.
Cables + autonomous power
Wireless devices solve one problem but create another. Removing all cables from the device allows for easy installation. But in most cases, a device needs to have autonomous power in the form of a battery as well.
Sleeping mode + harvesting energy
There are many ways to extend battery life today. One way is to put the device in a sleeping mode when it’s not transmitting data. Another way is to harvest energy from solar, wind, vibration, heat, or other alternative sources that exist where the device is in use.
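A back-of-envelope sketch shows why sleep mode matters so much. All figures below are hypothetical (not from the book): a radio that draws 100 mA while active but only 0.01 mA asleep lasts orders of magnitude longer if it wakes only briefly:

```python
# Hypothetical duty-cycling arithmetic: battery life as a function of
# the fraction of time the device spends with its radio active.

def battery_life_hours(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Battery life given the fraction of time spent active (duty_cycle)."""
    avg_current_ma = active_ma * duty_cycle + sleep_ma * (1 - duty_cycle)
    return capacity_mah / avg_current_ma

# A 2000 mAh battery, 100 mA active draw, 0.01 mA sleep draw:
always_on = battery_life_hours(2000, active_ma=100, sleep_ma=0.01,
                               duty_cycle=1.0)
duty_cycled = battery_life_hours(2000, active_ma=100, sleep_ma=0.01,
                                 duty_cycle=0.001)

print(round(always_on))    # about 20 hours if the radio never sleeps
print(round(duty_cycled))  # over 18,000 hours (roughly two years) mostly asleep
```

Energy harvesting shifts the same equation the other way: any harvested current is subtracted from the average draw, further extending (or indefinitely sustaining) the device.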
A lot of data generated
It is highly likely that the volume of data generated by these other devices will dramatically exceed the volume generated by smartphones and tablets. While we are going to be watching high-definition videos on our tablets, all these devices are going to be quietly sending alerts, syncing data, and requesting status updates, which in aggregate will amount to terabytes of data.
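A rough illustration of how small messages add up (the device count, message size, and frequency are assumptions, not figures from the book):

```python
# Back-of-envelope aggregate: many tiny messages from many devices.

devices = 50_000_000           # assumed fleet of connected devices
message_bytes = 1_000          # ~1 KB per alert or status update
messages_per_day = 24 * 4      # one message every 15 minutes

daily_bytes = devices * message_bytes * messages_per_day
daily_terabytes = daily_bytes / 1e12
print(daily_terabytes)  # 4.8 TB per day from status updates alone
```

Individually negligible, in aggregate these "silent" messages rival the traffic of media-heavy consumer devices.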
Global wireless connectivity
Having global access is also critical, because no OEM would want to build different versions of devices for various markets or put different SIM cards into the devices depending on which regional markets they go to.
Interoperability with SIM cards
It used to be a very difficult thing to deliver a single SIM. Now you put the same SIM in every device you build, and you have one carrier to work with for coverage in two hundred countries around the world. So the single SIM simplifies device manufacturing and allows you to ship that automotive device anywhere you want; when it lands, it gets turned on automatically.
What else needs to happen?
While a lot has already happened to simplify and enable the Internet of Things, much still needs to happen. Among those things are integration, standardization, simplification, and figuring out what to do with all the data produced by devices and sensors: how to turn that data into knowledge that can inform decisions and support and improve existing processes within companies.
Building infrastructure right now
I think right now people are just putting in the infrastructure. I don’t think people have figured out, in many cases, what to do with the data. Maybe they understand the primary use of the data: By monitoring a truck’s location I can increase my supply chain efficiency.
Cons of RFID vs cellular
In this way, RFID provides a high level of granularity for seeing and detecting things. However, each RFID installation needs RFID readers, which require installation, calibration, and fine-tuning. RFID does not provide the ubiquity of other networks. Let’s take cellular: Mobile carriers provide access to cellular networks almost everywhere — cellular networks are ubiquitous.
Challenges in RFID?
Sanjay pointed out that the biggest cost in RFID today is in installation and integration. One way to address the issue of ubiquity, he says, would be to wirelessly enable RFID readers with 4G or Wi-Fi and make them ubiquitous.
Which type of wireless technology to use?
All in all, wireless networks are a very important consideration when building an M2M business. Depending on the individual use case, M2M modules may have to receive satellite signals in remote areas, or use Bluetooth or Near-Field Communication (NFC) for short distances. Telecom carrier networks are by far the most dominant mode of transmission, typically paired with GPS for positioning; however, in high-security environments one might opt for NFC as the transmission standard.
This market always reminds us of the mobile phone industry in the early days and the fragmentation issues that there were for a number of years. If a couple of players are able to design application platforms that allow for easy and cost-effective development and deployment of applications, these platforms would be able to benefit from massive network effects, like the ones we have seen around mobile platforms starting in 2006.
Weak human + machine + better process
Garry concluded, “[W]eak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”
We could track a FedEx package in real time much better than we can today, when we only get milestone updates. A FedEx package will become a smart package: by collecting contextual information, it will know not only its location at any given time, but also whether it’s running late and whether its contents are safe.
Visualizing things never seen before
Astro Teller says, “We could see things about people’s commute cycles, about how long they commute and how it relates to their sleep and other things. No one has ever seen things like this on a hundred people or even on a million people.”
Standardization and the cloud
Fundamentally, I believe IT integration is going to disappear. What is going to happen though, I think, is that everything is going to go to a cloud, like a Salesforce.com model. Things are going to become much more standardized, much more like take it or leave it.
I think that outsourcing innovation actually ensures security, because you have a few professionals that have best practices. I’d rather trust a thousand people at a company like Amazon Web Services than the four guys in an IT division who have been working there for twenty years. The cloud companies will have the latest enterprise software on their machines, including the latest security updates.
3-D printed electronics?
While 3-D printing is an attractive concept for mechanical parts, interesting things are happening with electronic parts as well. A lot of innovation is happening with inorganic printed electronics. According to Sanjay Sarma, we are probably seven to ten years from being able to 3-D print electronics that will be used commercially in M2M devices.
Scaling software vs hardware
Of course it is much easier to scale software than to scale hardware. In the phone and in the PC space, there were a lot of different hardware manufacturers. We forget their names now, and most of them fell away because they could not scale. But at the time, it wasn’t clear who was going to win.
APIs from device manufacturers
As soon as the APIs from a few of the device manufacturers are good enough, you will see essentially no more new entrants into the hardware market unless someone has a radically good idea and wants to be the best in the world of devices. All the new entrants will start to be software-only entrants and will ride on top of the existing hardware, and then there will be a race between the fifty companies that have something you can wear and, over time, all but three or four of them will fall away, which is exactly what happened to PCs.
Disruptive ideas often come at you from those outlying areas affectionately called “left field.” Ironically, the more successful you and your company are, the more likely that you’ll miss those seemingly orthogonal ideas. Success can be your worst enemy.
What is your M2M strategy?
The question corporate executives were asking at the time was, “What is your digital strategy?” Today, as we approach the last mile of connectivity with the physical world around us, the question is, “What is your M2M strategy?” M2M is going to affect every company on the same order of magnitude as the Web did in the ’90s.
City and networks
The city is becoming like a computer in the open air. All the networks — telecommunications, transportation, energy — are getting digitized for better management. But at the same time, you can think of them as a huge nervous system that tells you a lot about what’s going on in the city in real time.
In reality, the promise of capturing and processing information in real time from various sensors around the city — transportation, pollution, waste management, energy, and so on — is very powerful. It has the potential to democratize information and give more power to citizens. And with that democratization and sharing of information and responsibility, a lot of things become possible.
All of a sudden, we’re almost going back to the old days of ancient Greek democracy with empowered citizens. We now have the capacity to influence, design, and make decisions and suggestions about life in the city by learning about how our city and our environment are managed.
It would ensure that minorities get as much service as majorities do, that budgets are allocated correctly, that the experts and planners use their expertise for the public good. There’s room for this new citizenship. There’s change in government and there’s change in the functionality of the city. There’s also change in business models. New business models become possible when you start to connect so many things together and put people in between.
If you generate excess energy on your rooftop solar panel, it can be stored in your neighborhood’s energy-dense capacitors and batteries, and then sold to and consumed on demand by the neighbors.
M2M technology is critical for the manifestation of the idea of microgrids, where energy is generated when the sun and wind are available and then locally consumed on demand. Knowing in real time how much energy has been produced, how much excess energy is available, and how much is being consumed are absolutely critical to ensure an uninterrupted flow of electricity.
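The real-time accounting a microgrid needs can be sketched as a simple balance per interval. This is an illustration with hypothetical numbers, not an actual grid-control algorithm:

```python
# Minimal microgrid bookkeeping: each interval, compare local generation
# against local demand to decide how much to store, sell, or buy.

def balance(generated_kwh, consumed_kwh, stored_kwh, capacity_kwh):
    """Return (new_storage_level, surplus_for_sale, shortfall_to_buy)."""
    net = generated_kwh - consumed_kwh
    if net >= 0:
        new_level = min(stored_kwh + net, capacity_kwh)
        surplus = net - (new_level - stored_kwh)   # overflow sold to neighbors
        return new_level, surplus, 0.0
    draw = min(-net, stored_kwh)
    shortfall = -net - draw                        # must be bought from the grid
    return stored_kwh - draw, 0.0, shortfall

# Sunny afternoon: 12 kWh generated, 7 consumed, battery nearly full.
# One kWh tops off the battery, four are available for sale.
print(balance(12.0, 7.0, stored_kwh=9.0, capacity_kwh=10.0))
```

Without connected meters reporting each of those quantities in real time, none of these decisions can be made, which is why M2M is the precondition for microgrids rather than an add-on.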
Value of information
All major appliances are going to be connected, and the value of the information from major appliances will be worth a lot for those appliance manufacturers. Not only will they discover usage patterns, but also ways to better manage marketing and supply chain around parts replacement by staying connected with their installed base.
Technologists often forget aspects of human behavior and its adaptability. Ultimately, connectivity is supposed to improve the quality of life and the standard of living, just as home appliances did fifty years ago.
Designing better appliances
According to Panos, non-intrusiveness and immediate value are critical for the adoption of connectivity in the home. For example, if my appliances start telling me how I can use them more effectively to achieve better results and save electricity, I may be prepared to pay more for them. In the longer term, appliances would learn how people live and behave, which would give manufacturers a wealth of information on how to design better appliances.
Better medical care
From tracking hospital assets and patients with RTLS and RFID, to replacing bulky wired body sensors with unobtrusive wireless ones, to remotely monitoring hospital equipment, the world of health care is in for some dramatic changes. The other major group to be affected is consumer medical devices: Soon, they will be able to give consumers tools to manage their own health by better understanding their lifestyle and what is happening in their body between doctor visits.
For example, a navigation app can get you from Palo Alto to San Francisco during rush hour within five minutes of its ETA. But it still requires physical user input. What if we take this concept a notch further and imagine cars automatically populating an application like this with real-time data like speed, acceleration, weather conditions, and more? Without any physical user intervention we would have, for example, very accurate maps of traffic.
Solve a problem
Usually, it’s easier to come up with a use case that solves a well-known (old) problem in a new way (through technology), as just described in the asset-tracking use case. It’s much harder to come up with a new use case that solves a lesser-known problem, like what we sometimes find in healthcare. But companies, especially start-ups, do it all the time. Sometimes they win, more often they lose, but in the end, the new use cases are the ones that open tremendous new opportunities for value and growth.
Making sense of data
Also, no doctor wants to see the conductivity of your skin thirty-two times a second for the last two years. It means nothing to them. You have to simplify the data in terms that will make sense to people, and different applications require different pieces. We intend to play a role in analyzing and simplifying big data. We don’t have to build all the apps; other people have our APIs and have started to build apps.
The initial consolidation among M2M hardware manufacturers started in the telematics and UBI space, where devices are becoming more standardized. For example, small, connected devices that plug into the OBD-II port of cars are becoming more similar and there are two or three major OEMs building them. Consolidation leads to larger volumes per player, which in turn leads to lower cost and ease of deployment.
In addition, wireless technology has both huge advantages and huge challenges. It eliminates wires and makes the location of connected devices a nonissue. At the same time, the position of antennae becomes a critical problem as devices get smaller and more wireless technologies are packed into one device: cellular 2G, 3G, 4G, Wi-Fi, Bluetooth, NFC, and GPS, to name a few.
We believe and predict that eventually a new type of virtual OEM for M2M will emerge that will be able to bring devices to market fast, while reusing a lot of components and thus minimizing hardware risk. These OEMs will find a way to dramatically simplify the to-market process for hardware and launch new types of devices in small volumes without incurring significant costs. They will be able to quickly iterate and integrate third-party sensors and provision devices on the network with third-party applications.
It almost never happens that whatever an entrepreneur sets out to do becomes the thing they end up doing. As Woody Allen once said, “Eighty percent of success is showing up.” Sure enough, that’s what happened to us.
In M2M it’s primarily about cost. As soon as the cost of the hardware plus the service gets to the right level, all the things you thought should happen start happening. You can basically draw a price curve of where it’s going and where it’s been and when these M2M applications will start saving you enough money to pay for themselves.
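The break-even logic behind that price curve can be made explicit. All figures here are hypothetical, a sketch rather than market data:

```python
# When does an M2M application pay for itself? It's viable once the
# monthly savings exceed the monthly service cost, and the payback
# period is hardware cost divided by the net monthly benefit.

def payback_months(hardware_cost, monthly_service, monthly_savings):
    """Months until cumulative savings cover the hardware, or None."""
    net_monthly = monthly_savings - monthly_service
    if net_monthly <= 0:
        return None  # never pays for itself at this price point
    return hardware_cost / net_monthly

# A $50 device with $5/month service that saves $15/month:
print(payback_months(50, 5, 15))  # pays for itself in 5 months
```

As module and service prices fall along the curve, use cases that previously returned `None` cross the threshold, which is exactly the "things you thought should happen start happening" effect.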
Metcalfe’s Law states that the value of a network is proportional to the square of the number of connected users of the system. This law is applicable not only to telecommunications networks and social networks, but also to connected things, like M2M nodes. Assuming such a network effect takes place, it will create huge value for the ecosystem.
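The law as stated in the text can be written out directly (the proportionality constant is arbitrary; the intuition is that value tracks the number of possible pairwise connections):

```python
# Metcalfe's Law: network value grows with the square of the node count.

def metcalfe_value(n, k=1.0):
    """Value of a network of n nodes, V = k * n^2."""
    return k * n ** 2

def pairwise_links(n):
    """The underlying intuition: n nodes form n*(n-1)/2 distinct links."""
    return n * (n - 1) // 2

# Doubling the number of connected nodes quadruples the value:
print(metcalfe_value(1000))   # 1,000,000 units of value
print(metcalfe_value(2000))   # 4,000,000 units of value
```

For M2M, each newly connected device class can in principle exchange data with every existing one, which is why the value is argued to grow superlinearly in the number of connected things.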
A common hardware denominator
We feel like the hardware space in M2M somewhat resembles the computer space before the emergence of the PC (or Apple II for that matter). There are multiple form factors with different requirements, different use cases, and different software. However, once a common denominator–type of device emerges, it will take the market by storm.
Device management and new applications
Device management by itself takes you almost 50 percent toward being able to create new applications.
Opportunities are driven by problems
But at the end of the day, the best investment opportunities are going to be driven by very well-defined problems that the Internet of Things will help solve: increased visibility, increased productivity, reduced guesswork, better risk management, and better connection to our environment.