Edge Computing and Self-Powered Sensors
Q&A spotlight with Dr. David Wentzloff, Everactive’s Co-Founder & Co-CTO on the intersection of edge computing and self-powered sensors
Introduction
Everactive Co-Founder and Co-CTO David Wentzloff sits down with Director of Product Marketing Rafael Reyes to explain edge computing and how this technology combines with self-powered sensors to expand machine learning into the industrial internet of things (IIoT).
David, thank you so much for coming today. To get us started, why don't you tell me about yourself, your current position, and a bit of your history?
Sure. I am one of the technical co-founders and a co-CTO at Everactive. I’m also a professor at the University of Michigan. My research there focuses on ultra-low-power wireless integrated circuits and low-power radios. That’s some of the technology that we spun out of UM and into the company back in 2012. Ben Calhoun, my co-founder, is a professor at the University of Virginia working on digital circuits and self-powered systems. That rounds out the rest of the technology, which we spun out from the University of Virginia at the same time.
Inside Everactive, my focus still remains on the technology roadmap and how that plays into the products that we’re developing inside the company. We have some really exciting things coming for our next generations of chips!
What is edge computing? How is it different from traditional computing approaches?
The most obvious differences here are the resources you have in an edge computer versus a more traditional desktop or mobile computing environment. Let’s define a traditional computer as a laptop on your desk or a high-performance server, where you have virtually unlimited resources. You’re connected to the internet. You’ve got unlimited power. So there’s a lot you can do with that.
Contrast that to an edge computer, where the resources are much more constrained. You’re constrained in the amount of memory you have. You’re constrained in the speed of the processor. You’re constrained in the amount of data you have access to at any given time. You may or may not be connected to the internet. That’s how I would, at a high level, explain the differences.
In terms of functionality, edge computing is typically much more targeted to an application. Whereas you could run virtually whatever you want on a traditional computer, edge computers are typically going to be very specific to an application. For example, your smart thermostat is something that specifically controls the temperature in your home. That is its primary function, and it’s probably all it will be asked to do for its entire lifetime. It’s an (edge) computer, but you would never think of checking your email on it, for example. It’s been programmed and optimized to do that one function.
The same is true for industrial IoT sensors. They’re going to be tailored towards specific applications. For example, at Everactive we have a steam trap monitoring solution, as well as a machine health monitoring solution, which are sophisticated edge computers that have been tailored to those specific applications in order to deliver a solution.
How is latency different between edge computing and traditional computing approaches?
This actually can be pretty surprising. Edge computers might be more limited in terms of the amount of resources they have – for example, they might have a slower processor – but typically edge computers will have what are called hardware accelerators implemented on them. These are used to implement very complex functions or algorithms quickly and efficiently. For example, digital signal processing can be executed very quickly and very efficiently – meaning energy efficiency – on an edge computer. So while they have limited resources, we can optimize and direct those resources to specific applications. As a result, we can actually have very low latency – in many cases lower latency than if a general-purpose computer were assigned to the same function, because of the additional overhead a general-purpose system carries.
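As a minimal sketch of the kind of signal processing described here, the Python below reduces one window of raw samples to a couple of features, the way an accelerator would in silicon. The window size and sample rate are illustrative assumptions, not Everactive specifics:

```python
# A minimal sketch (Python stand-in) of accelerator-style DSP: reduce a
# raw sample window to a few features as it is captured. The window
# size and sample rate are illustrative assumptions.
import numpy as np

SAMPLE_RATE_HZ = 1000   # assumed vibration sample rate
WINDOW = 256            # assumed samples per processing window

def extract_features(samples: np.ndarray) -> dict:
    """Compute RMS energy and dominant frequency for one window."""
    rms = float(np.sqrt(np.mean(samples ** 2)))
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE_HZ)
    dominant_hz = float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin
    return {"rms": rms, "dominant_hz": dominant_hz}

# Each window is processed as it is sampled, so decision latency is one
# window (256 ms here) rather than a round trip to a remote server.
window = np.random.randn(WINDOW)  # stand-in for real sensor samples
print(extract_features(window))
```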
Let's get into the other aspect of our conversation, which is self-powered sensors. Can you explain the self-powered sensor and how it differs from a traditionally powered sensor or node?
Sure. Industrial IoT devices are typically wireless devices. Traditionally, they might be powered by a battery. The nice thing about batteries is that they produce a very nice, clean source of power – right up until the point that the battery dies. After this point, the voltage on the battery drops and the power supply is effectively gone. The device no longer operates or performs its intended function, and once the battery dies, we have no expectation that it will keep working. This seems obvious, but it’s useful to state explicitly when we contrast it with self-powered sensors.
The operation and expectations are very different with a self-powered sensor. The source of our energy is scavenged from the environment. That energy can come and go. The lights could turn on and off, for example. Or, when harvesting from heat, the temperatures could fluctuate. That means we have an abundance of power sometimes, and no power at other times. So intermittency of power is a regular occurrence for a self-powered sensor.
While the source of power may come and go multiple times in an hour, it is unacceptable for self-powered sensors to stop working every time the lights turn off. That doesn’t make for continuous, always-on systems. We’ve designed our chip, as well as our entire sense node, from the ground up, assuming that it’s going to run from harvested energy with no battery in the system, and that it must gracefully survive periods of time with intermittent power.
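A rough illustration of that design principle is the checkpoint-and-sleep loop below. It is a sketch of the pattern with hypothetical voltage thresholds and a file standing in for non-volatile memory, not Everactive's firmware:

```python
# Sketch of operating through intermittent power: do work only while the
# storage capacitor holds enough charge, and checkpoint state so a power
# loss is survivable. Thresholds and the storage API are hypothetical.
import json

WAKE_THRESHOLD_V = 2.8    # assumed: enough charge to start useful work
SLEEP_THRESHOLD_V = 2.2   # assumed: below this, stop and wait for energy

def run_when_powered(read_cap_voltage, do_one_measurement, nvram="state.json"):
    if read_cap_voltage() < WAKE_THRESHOLD_V:
        return  # not enough harvested energy yet; stay asleep

    # Restore whatever state survived the last power loss.
    try:
        with open(nvram) as f:
            state = json.load(f)
    except FileNotFoundError:
        state = {"samples_taken": 0}

    while read_cap_voltage() >= SLEEP_THRESHOLD_V:
        state["samples_taken"] += do_one_measurement()
        with open(nvram, "w") as f:     # checkpoint after each unit of work
            json.dump(state, f)
    # Voltage too low: go back to sleep until harvesting refills the capacitor.
```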
We do have the ability to store energy locally. We do so in a capacitor instead of a battery. Capacitors have much longer lifetimes than batteries. They don’t wear out like rechargeable batteries do. While battery lifetime is measured in years, capacitor lifetime is measured in decades.
Think about the battery in your phone – just to use an example. That’s probably the first thing that’s going to wear out – before any of the chips or the display or memory. At some point, you find yourself saying, “my phone isn’t holding a charge as long anymore.” At that point people either replace the battery or buy a new phone. The point is that rechargeable batteries do wear out. They lose their capacity over time. Capacitors, on the other hand, don’t. That means that we can “stay alive” throughout these intermittent sources of power by powering from energy stored on capacitors.
In general, the amount of power that you can harvest is pretty small. Therefore, if you intend to operate continuously, the average power budget your electronics must stay within is much smaller. So, not only do we have to deal with intermittent power, we also have to hit very low average power levels.
The analogy I like to use is the gas tank in your car. It’s unrealistic to assume that you would carry around enough gas in your car to run it for its entire lifetime – let’s say 100,000 miles. That would be a huge gas tank! Instead, you have a smaller tank with enough gas to go 300-400 miles, and you go to the gas station periodically to fill it up.

Well, your typical non-rechargeable battery-operated device does carry enough energy to operate for its entire useful lifetime – until the battery dies. To go back to the analogy, it’s carrying enough gas to run 100,000 miles, so it needs a really big tank. That’s why we find that if we open up a battery-operated IoT sensor, the volume inside is typically dominated by batteries: you’re carrying around enough energy to keep the device alive for its entire useful lifetime.

For self-powered sensors, our “gas tank” is the capacitor, which holds less energy but also takes up much less volume. Our lifetime is not limited by the amount of energy the capacitor can store, since we are continuously replenishing it with harvested power. The size of a self-powered sensor can be much smaller too, because there is no lifetime requirement forcing the use of large batteries.
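Some back-of-the-envelope numbers make the "gas tank" concrete, using the standard capacitor energy formula E = ½CV². All values below are illustrative assumptions, not Everactive's actual components:

```python
# Hypothetical "gas tank" math. A capacitor stores E = 1/2 * C * V^2;
# the usable portion is the difference between full and minimum voltage.
C = 0.5                     # farads (assumed supercapacitor)
V_FULL, V_MIN = 3.0, 2.0    # volts (assumed operating range)

usable_joules = 0.5 * C * (V_FULL**2 - V_MIN**2)   # 1.25 J here

P_AVG = 100e-6              # watts: assumed 100 uW average electronics budget
hours_of_reserve = usable_joules / P_AVG / 3600
print(f"{usable_joules:.2f} J usable -> {hours_of_reserve:.1f} h of reserve")
# ~3.5 hours of reserve: enough to ride through lights-off periods, as
# long as harvesting refills the capacitor on average.
```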
So to put these two together: how does edge computing work in self-powered sensors?
For starters, self-powered sensors are always on, always monitoring, and always generating data. This means we have a new, continuous data stream flowing that is rich in information, but that information needs to be extracted by using some kind of post-processing.
One approach would be to ship all that data to the cloud and do the post-processing there. But as we discussed earlier, there’s some latency associated with this, and a lot of energy has to be spent to transmit all of that data wirelessly. This solution also doesn’t scale well to thousands or hundreds of thousands of sensors in a facility, because if every sensor is trying to stream raw data to the cloud, you’re going to have congestion issues in the wireless network.
Edge computing can process that data in real time, directly at the source of the data. Again, thanks to the latency and efficiency of these hardware accelerators, we can process the data as we’re sampling it and extract just the useful information that matters for making a decision. Then we send only that information to the cloud, effectively compressing the data stream. In addition, we can do things like trending that data over time, or applying additional algorithms on top of it. But the point is, we’ve done the bulk of the processing on the data at the edge.
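That compression effect is easy to quantify with assumed numbers; everything in the sketch below is illustrative:

```python
# Sketch of the compression effect of edge processing: ship a handful
# of extracted features instead of the raw window. All sizes assumed.
RAW_SAMPLES_PER_WINDOW = 256
BYTES_PER_SAMPLE = 2          # assumed 16-bit ADC samples
FEATURES_PER_WINDOW = 4       # e.g., RMS, peak, dominant frequency, trend
BYTES_PER_FEATURE = 4         # assumed 32-bit values

raw_bytes = RAW_SAMPLES_PER_WINDOW * BYTES_PER_SAMPLE     # 512 bytes
feature_bytes = FEATURES_PER_WINDOW * BYTES_PER_FEATURE   # 16 bytes
print(f"radio payload shrinks {raw_bytes // feature_bytes}x per window")
# A 32x smaller payload per sensor is what keeps thousands of nodes
# from congesting a shared wireless network.
```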
That’s one aspect. The other is that there are new types of processing coming. You have certainly heard of artificial intelligence, and maybe you’ve heard of neural networks, a form of artificial intelligence that is easily implemented in hardware, such as in the chips we design at Everactive. This type of artificial intelligence – at least some parts of it – could, again, be very efficiently implemented at the edge in an accelerator.
We’re not really constrained in the functionality of our edge computing platform there. The sky’s the limit. There are a lot of very exciting opportunities in pushing the front end of these neural networks or artificial intelligence engines out to the edge, so that we can have very smart pre-processing and decision-making on our edge devices, which can then leverage information from the cloud for training to become exceptionally smart neural networks.
Why is running the neural network at the edge better than running it on a central server?
That’s a great question. Let’s talk about artificial intelligence and how we go about implementing this. I’m going to break it up into two components: there’s (1) the training component, and (2) the implementation or execution component.
The training component is where we’re teaching a neural net how to interpret the data it’s looking at. I can train a neural net on vibration data from a motor by showing it lots of vibration data and telling it which sets are from a healthy motor and which sets are from a motor that needs maintenance. By doing that, I will train the neural net. In the second component, we take a measurement from a specific motor and process it through the trained neural net, which quickly decides whether the motor is healthy or needs maintenance.
With training, you benefit from having access to lots of data from lots of motors, so it’s natural to train a neural net in the cloud, where all of that data is available. But once you’ve trained it, you can push the result down to the edge for the second, execution component. You take advantage of the wealth of information you got from training, then run it very efficiently on your edge computing platform and make decisions immediately at the edge.
Like you’re sending your AI to college at the central server, and once it graduates, it goes to work at the edge sensor.
That’s great. That’s perfect [laughs].
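To make that "college" split concrete, here is a toy end-to-end sketch: a tiny logistic-regression "neural net" trained on synthetic vibration features (standing in for cloud training on pooled data), whose learned parameters are all that gets pushed down for edge inference. The features, data, and model are hypothetical stand-ins, not Everactive's algorithms:

```python
# Toy sketch of train-in-the-cloud / run-at-the-edge. The model, data,
# and features are synthetic stand-ins, not Everactive's algorithms.
import numpy as np

# --- Cloud side: train on pooled, labeled data from many motors ---
rng = np.random.default_rng(0)
healthy = rng.normal(0.2, 0.05, (500, 2))   # synthetic (rms, peak) features
faulty = rng.normal(0.6, 0.10, (500, 2))
X = np.vstack([healthy, faulty])
y = np.array([0] * 500 + [1] * 500)

w, b = np.zeros(2), 0.0
for _ in range(2000):                        # plain logistic regression
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * float(np.mean(p - y))

# --- Edge side: only the trained parameters (w, b) get pushed down ---
def edge_inference(features):
    """A handful of multiply-accumulates: cheap enough for an accelerator."""
    return 1 / (1 + np.exp(-(np.dot(features, w) + b))) > 0.5

print(edge_inference([0.65, 0.62]))   # -> True: flag motor for maintenance
print(edge_inference([0.18, 0.22]))   # -> False: healthy
```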
You were talking about the advantages. Are there any disadvantages of edge computing in self-powered sensors that you can think of?
Anytime you don’t fit the model I just described, you will have disadvantages. For example, if you’re disconnected from the cloud – say you’re on a local private network – you can’t easily take advantage of that wealth of information. You can’t go to “artificial intelligence college.” You’re limited to the data that you can sense locally. These things [neural networks] really benefit from training on large datasets, so being cut off could limit the capabilities there.
There’s also the way in which we implement our hardware accelerators. Today, we’re doing these neural nets in hardware, which means we decide on the parameters of those hardware accelerators when we’re designing the chip, given today’s needs and what we anticipate future needs might be. Anything that doesn’t fit that mold – perhaps some future algorithm that we weren’t anticipating or that doesn’t directly match – you could still execute in software on a general-purpose processor at the edge, but it just might not be as efficient, because it’s not taking advantage of those hardware accelerators. Those are certainly things that we’re looking out for.
The nice thing here is, when you get down to what these algorithms actually do from a mathematics perspective, it’s a very small set of functions that are being used over and over again in order to implement these algorithms efficiently. And for that reason, we can actually cover a very broad set of algorithms with just a small set of hardware accelerators.
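A small illustration of that reuse: in the sketch below, a FIR filter output and a dense neural-network layer both reduce to the same multiply-accumulate (MAC) primitive, which is why one accelerator kernel can cover both. The functions and values are illustrative:

```python
# Why a few primitives go a long way: a DSP filter and a neural-net
# layer are both built from the same multiply-accumulate (MAC) kernel.
def mac(vec_a, vec_b):
    """The one primitive: sum of elementwise products."""
    return sum(a * b for a, b in zip(vec_a, vec_b))

def fir_filter_output(recent_samples, taps):
    """One FIR filter output = MAC of recent samples against the taps."""
    return mac(recent_samples[-len(taps):], taps)

def dense_layer(inputs, weights, biases):
    """One neural-net layer = a MAC per neuron, plus a bias."""
    return [mac(inputs, row) + b for row, b in zip(weights, biases)]

print(fir_filter_output([0.1, 0.4, 0.2], [0.5, 0.3, 0.2]))              # 0.21
print(dense_layer([1.0, 2.0], [[0.5, -0.25], [0.1, 0.9]], [0.0, 0.1]))  # [0.0, 2.0]
```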
So what can you tell me about the uses? Where have you observed edge computing applied to self-powered sensors?
Most of these applications are going to be on the sense side, not the control side. The wide range of these applications involves adding new sensors, perhaps where you haven’t been monitoring before, to produce continuous streams of sensed data. That’s as opposed to the control side, where we’re actuating: opening a valve, for example, or turning a machine on or off, or applying the brakes in a car. Some of these control systems might be life-critical systems. Today, the control side is an untapped area for self-powered edge computing, but there’s a rich and growing set of applications on the sense side.
You anticipated the question ‘why not the control side’ with your example of braking a car. But as AI develops the ability to make decisions at the edge, can we be more confident that the answer will be right?
We’re probably going to get there slowly over time. A lot of control systems are incumbent systems. They’re hardwired. They’re closed systems. They weren’t developed for this type of distributed technology. But one nice thing about artificial intelligence is that a lot of the time it can see things that the human eye can’t. It can pick up on things that might otherwise go unchecked, and then use them quickly in feedback loops to control other actuators. Not only will AI have an impact on these closed-loop systems, I think it can actually make them more efficient, faster, and better.
But like I said, it’s going to be an adoption over time. Right now we’re watching the adoption of autonomous vehicles play out like a slow-motion movie, right? We started with anti-lock brakes and cruise control, then moved to adaptive cruise control and automatic emergency braking. A lot of people now trust that the machine is going to do the right thing in these scenarios, but in the beginning, there was some hesitation to adopt.
Now fast forward to self-driving cars. Today there is a person in the loop. Some of these cars require you to keep your hands on the wheel when they’re in self-driving mode. Others will let you take your hands off. But some people prefer to keep their hands on the wheel, especially when going around a curve or in some tricky situation. They don’t know if they can trust the algorithm just yet. But eventually, algorithms will get smarter, and people will get more comfortable with the decisions these control loops are making in real time.
So translate that to industrial IoT applications. We may start with a system where there is a human in the loop who approves any decision before it turns into an actual control action on equipment. Eventually, as the comfort level grows and the accuracy of those control systems improves, we’ll see the human in the loop slowly be removed.
What is your overall assessment of the state of edge computing and self-powered sensing? Is this a growing trend, slowing trend, exciting opportunity, or a terrifying threat?
First of all, there are not a lot of self-powered systems out there. To our knowledge, Everactive’s is the only solution deployed in industrial environments today. So if we narrow it to self-powered systems, it’s an exciting, growing trend, but Everactive is really fueling it.
Edge computing as a whole, though, is absolutely a growing trend. We see this being leveraged pretty much in every vertical: anywhere that you can add more intelligence into the edge – ubiquitous sensing or compute that kind of disappears into the walls – that’s a growing space, absolutely.
The IoT is probably going to be what drives the semiconductor industry for the next decade or two, in terms of volume of chips. The majority of the chips that the world is going to produce and buy are going to be for edge compute devices, not cell phones, laptops, or servers.
Which means there are going to be more specifically designed chips that perform limited functions, but maybe faster and more efficiently.
Right – optimized for these edge compute applications. Which doesn’t necessarily mean limited capability. It actually could offer better capability for what it does, because it’s optimized for that, as opposed to general purpose computers, which are usually optimized for nothing, because by definition they’re “general purpose.”
What might be driving self-powered sensors and edge computing to coincide and unite?
The advantage of a self-powered sensor is that you can put it anywhere. You can scale to large numbers, because you don’t have to worry about maintenance. That means you’re going to be producing a lot more data than you had before. Something is going to have to act on that data: processing it, storing it, making decisions, or extracting information from it. Something is going to have to deal with these new data streams that didn’t exist before, but now exist in large quantities.
Each one is feeding the other. It’s a symbiotic relationship. The self-powered sensors are producing the data, and we need edge compute to deal with that data. But because we have very sophisticated edge compute capabilities, we can now dream up all kinds of really cool applications for self-powered sensors, right? And it’s because those sensors are smart. They’re not just generating data and streaming it to the cloud.
What is providing value when it comes to edge computing and self-powered sensors?
What is certainly working, and generating the most value, are applications where self-powered sensors open a new window into a space you previously had no visibility into. That allows you to make better decisions, do better planning and scheduling, or simply better maintain that equipment.
When equipment fails, it can be very costly. Certainly, this is true in the case of steam traps. When a steam trap fails open, steam is literally being poured down the drain. It’s very expensive. It generally does not stop production or bring a facility down when that happens, but it’s very costly and typically goes undetected. By adding visibility into every single one of your steam traps, you can detect immediately when one of them fails. Repair it, and you save money that would otherwise have been lost.
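The scale of that loss can be estimated with a commonly used orifice-flow approximation (steam loss in lb/hr ≈ 24.24 × Pa × D², with pressure in psia and orifice diameter in inches). The inputs below are hypothetical, not figures from Everactive:

```python
# Hedged back-of-the-envelope for one failed-open steam trap, using the
# common orifice approximation L = 24.24 * Pa * D^2 (lb/hr). All inputs
# are hypothetical.
ORIFICE_D_IN = 0.125      # assumed 1/8-inch trap orifice
PRESSURE_PSIA = 114.7     # assumed 100 psig steam line
STEAM_COST = 10.0         # assumed $ per 1,000 lb of steam
HOURS_PER_YEAR = 8760

loss_lb_per_hr = 24.24 * PRESSURE_PSIA * ORIFICE_D_IN ** 2
annual_cost = loss_lb_per_hr * HOURS_PER_YEAR / 1000 * STEAM_COST
print(f"{loss_lb_per_hr:.1f} lb/hr -> ~${annual_cost:,.0f} per year, per trap")
# Roughly $3,800 per year here, which is why catching a single failed
# trap quickly can pay for the monitoring.
```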
Those applications certainly get people interested, because they generate a hard return on investment. But I think what’s really exciting here is that these applications are built on top of a self-powered platform. That platform can be used for a wide range of applications with minimal or possibly no changes to the hardware, or maybe even to the software running at the edge. While we are currently targeting very specific applications to develop our platform, I think we will be able to point the technology at new solutions, new problems, and new pain points much more quickly in the future.
What are the challenges that you have observed when it comes to the adoption of edge computing in self-powered sensors?
That really comes down to what application we’re going after, and what the requirements are for observing the particular features you’re trying to extract.
I’ll give you an example. If you just want to know whether the temperature of something crosses 100°C, that’s a very simple edge computing application: just monitor the temperature until it’s greater than the threshold and generate an alert.
Contrast that with detecting motion with a camera. I’m asking a computer to look at an image over time, and tell me if anything has changed. We can do things like that at the edge, but it requires quite a bit more processing.
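Side by side, those two workloads might look like the sketch below; the thresholds and frame sizes are illustrative assumptions:

```python
# Sketch contrasting the two edge workloads: a one-comparison threshold
# alert versus per-pixel frame differencing. All thresholds assumed.
import numpy as np

def temperature_alert(temp_c: float, limit_c: float = 100.0) -> bool:
    """The simple case: one comparison per reading."""
    return temp_c > limit_c

def motion_detected(prev, frame, pixel_delta=25, changed_fraction=0.01):
    """The heavier case: compare every pixel against the previous frame."""
    changed = np.abs(frame.astype(int) - prev.astype(int)) > pixel_delta
    return changed.mean() > changed_fraction

print(temperature_alert(103.2))            # -> True
a = np.zeros((120, 160), dtype=np.uint8)   # stand-in camera frames
b = a.copy()
b[40:80, 60:100] = 200                     # a "moving object" appears
print(motion_detected(a, b))               # -> True
```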
The application really drives what the edge computing needs are. It comes down to matching these capabilities and their needs. That’s where the low-power chip technology comes in, because it allows you to execute very complex algorithms efficiently and quickly, with low latency using hardware accelerators.
So let’s say that we have adopted these devices with self-powered edge computing. What new challenges are generated when you actually put these systems into practice?
In industrial applications, one challenge is just adoption and use of a new technology. Change is always difficult for people. Changing your processes or how you conduct your day-to-day business, that adaptation can be a challenge. Sometimes that’s our biggest competitor: status quo. Not some other competing technology, but the difficulty of changing a process, like how we monitor things or how we schedule service on our systems. That’s certainly one challenge and maybe the biggest one we deal with in the industrial space.
How do you see the role of governments and regulators in relation to the adoption of edge computing and self-powered sensors?
We’re seeing environmental regulations pressing facilities to become more efficient and to identify leaks, such as methane leaks. We’re also seeing some local governments give incentives for monitoring – for example, identifying failed steam traps and repairing them. When you monitor, you stop losing steam, which means you stop over-producing steam, which means you stop over-consuming natural gas, which means you stop emitting excess CO2. There’s a chain effect here, and governments have absolutely picked up on it.
In the future, something that will definitely impact industrial monitors and edge computing is how this data could be used or misused. We haven’t really seen much of that, at least that I can point to, for industrial IoT sensors. But there’s certainly a corporate citizenship and responsibility for companies that manage large amounts of data. That’s something we take really seriously at Everactive: we subject the company to two external audits of our security protocols and how we handle, transfer, and manage data. It’s something that I think everyone should take seriously.
To wrap this up, why will the industrial sector need solutions that work at the confluence of edge computing and self-powered sensors?
For industrial environments, it’s sensing where you haven’t before, and then it’s extracting information from that sense data using edge computing to then generate actionable insights. It’s not just looking at raw data, although that can be helpful. The end game is extracting insights, which you can then take action on.
David, thank you very much for your time!
For sure. I really enjoyed the conversation!