
Harnessing the Power of Sensor Fusion

Hubs, application specification and new technology help advance human-machine interaction.

The Intelligent Sensing Framework (ISF) from NXP/Freescale executes on Kinetis microcontrollers and enables embedded applications to subscribe to external sensor data, reading the data at various rates. ISF allows the microcontroller to act as a sensor hub for external sensors and manages the data for the host processor. Image courtesy of NXP/Freescale.


Peel back the layers of the technology that has brought the Internet of Things (IoT) to life, and you will find a growing network of sensing devices that take human-machine interaction to a whole new level. To take advantage of this opportunity, design engineers increasingly turn to sensor fusion with an eye on implementing greater intelligence in their user interfaces.

The challenge for developers is to create a blend of hardware and software that can efficiently and accurately aggregate and interpret the mountains of sensor data now available to consumer devices. To select the right components for their system, designers must understand the unique features of their fusion use cases. Ultimately, the fusion process aims to produce application-ready information that provides real insight into the user’s behavior and movement, thus opening the door for next-generation content and services.

MotionFusion firmware algorithms from InvenSense enable designers to integrate inputs from a 3-axis gyroscope, 3-axis accelerometer and a pressure sensor to create motion interfaces for devices, promising to reduce development costs and time to market. Image courtesy of InvenSense.

“Consumers demand ever more intelligent applications that can take natural, intuitive commands from users,” says Ian Chen, director, System Architecture, Software and Algorithms at NXP. To meet these expectations, the design engineer must be able to manage an ever-increasing number and variety of sensors and use those sensors to understand the application’s variables, environment and context.

Fusion Basics

While the basic concept of sensor fusion has long been established, the techniques that enable the process vary and constantly evolve. Generally speaking, sensor fusion is the combination of data from multiple sensors observing the same event, each taking advantage of its own unique perspective. The sensors may reside in different locations or sense different physical properties, so the noise captured by each sensor is independent of, and different from, that of the other sensors. As a result, information present across all the sensor data streams can be associated with the event of interest, while information present in only one stream can be treated as noise. In this way, the combined data from disparate sensors captures the true event better than any single sensor could.

One of the most common techniques of combining data from multiple sensors is the Kalman filter, an adaptive, linear filtering algorithm. There are, however, other ways of combining sensor data to better understand the underlying event. These include nonlinear methods, such as particle filters, and statistical methods, like neural networks and machine learning algorithms. All of these approaches perform sensor fusion.
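To make the Kalman approach concrete, the following Python sketch fuses a gyroscope's angular rate with an accelerometer-derived tilt angle in one dimension. It is a minimal illustration only; the noise variances, sample rate and simulated sensor figures are assumptions, not values from any vendor's implementation.

```python
import random

def kalman_tilt(gyro_rates, accel_angles, dt=0.01, q=1e-5, r=0.03):
    """Minimal 1-D Kalman filter: predict the tilt angle by integrating
    the gyro rate, then correct it with the accelerometer-derived angle.
    q and r are illustrative process/measurement noise variances."""
    angle, p = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for rate, meas in zip(gyro_rates, accel_angles):
        # Predict: integrate the gyro rate; uncertainty grows by q
        angle += rate * dt
        p += q
        # Update: blend in the accelerometer angle via the Kalman gain
        k = p / (p + r)
        angle += k * (meas - angle)
        p *= 1.0 - k
        estimates.append(angle)
    return estimates

# Simulated stationary device tilted at 0.5 rad, read by two noisy sensors
random.seed(0)
true_angle = 0.5
gyro = [random.gauss(0.0, 0.05) for _ in range(500)]         # rate should be ~0
accel = [random.gauss(true_angle, 0.2) for _ in range(500)]  # noisy angle
estimate = kalman_tilt(gyro, accel)[-1]                      # converges near 0.5
```

Because the gyro and accelerometer err in independent ways, the filter's estimate settles closer to the true angle than either sensor alone would report, which is the essence of the fusion argument above.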

The Magic is in the Software

With all these software options, the real challenge becomes identifying the best algorithm for the application. Remember, the quality of the user experience depends on how well the algorithm translates sensor data into useful, application-ready information. So designers must find the software that delivers the best data aggregation accuracy, compute efficiency and power conservation within the context of the application at hand.

To convert these general goals to actual performance, engineers must consider the nuts and bolts of the fusion process. “To select an optimal algorithm, designers must consider both the signal and the background noise of the sensing environment,” says Chen. “The signal can vary because of the installation environment, temporary unavailability of the sensor data in the case of a sensor network, or variations in the user population in the case of human-machine interface applications.”


The application may introduce other issues that must be considered, as well, such as latencies, synchronization with other events in the application, types of error that must be accommodated, and run-time workload distribution. In the case of the workload distribution, the designer must know if the algorithm runs as part of an embedded system, in a network access point or gateway, or on servers in the cloud.

All these elements must be considered within the context of the hardware environment. “The sensor framework will need to be scalable to fit the hardware architecture of the current product, and it will need to be modular and scalable for future products,” says Eitan Medina, vice president of Marketing and Product Management at InvenSense. “This means that the algorithm must come with a complete tool chain that will allow fast prototyping, experimentation, correlation and integration into applications.”

A Starting Point

To jump-start the algorithm development process, sensor fusion providers often offer code libraries. The idea is to help designers unfamiliar with the technology avoid re-inventing the wheel and to minimize time to market. Some vendors have taken this concept one step further by offering open source sensor fusion libraries. To get the most value out of these resources, designers should not see these libraries as an alternative to developing their own code, but rather as a starting point for the creative process.

“Open-source libraries are like any other open source code base – it will get you started,” says Medina. “Do not expect it to necessarily meet your product goals. In fact, expect it not to. Your engineers will need to own it [the library code] to make it work for you. If you decided to do your own fusion library, the open source will get you a good starting point in your development.”

The SENtral-A2 low-power coprocessor from PNI Sensor includes a broad algorithm feature set and development framework. This combination aims to streamline algorithm creation for wearables and smartphones, providing designers with a faster way of delivering personalized, contextual experiences on devices. Image courtesy of PNI Sensor.

The problem with relying too heavily on open-source libraries is that sensor fusion cannot be a one-size-fits-all proposition. “Today’s applications are so specialized that an unfocused approach no longer provides data accurate enough to meet the increasing demands of the marketplace,” says David Sohn, China Sales & Marketing liaison manager at PNI.

Things to Look For

Designers should remember that the use of a library, whether open source or proprietary, does not absolve them of the need to verify their code against the actual use case of the application. They should be sure they understand the parameters of the fusion algorithm they adopt and how it has been validated.

When choosing a library, here are a few issues that designers should keep in mind:

• What is the system bandwidth?

• What are the sensitivities to various input noises?

• What physical interactions are, and are not, modeled by the fusion library?

• Every algorithm makes tradeoffs between efficiency and accuracy of the model. Are these tradeoffs compatible with the application?

• With which sensor hardware and under what environments is the algorithm validated?

Tools and Testing

When choosing a development environment, be sure to select a platform that will accommodate algorithm modeling. “Focus on the development platform that will allow you to collect data for algorithm development and choose a tool chain built around a proven sensor framework, a graphical user interface and a wide set of algorithm building blocks so that you can focus on differentiating your application rather than building everything from scratch,” says Medina.

Keep in mind that ensuring the right test coverage for the algorithm can be time-consuming. The effort ranges from using simulated inputs to verify the implementation, to capturing data in the real world and comparing the sensor algorithm's output with ground truth.
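The simulated-input end of that range can be sketched in a few lines of Python. Here a known trajectory serves as ground truth, a noisy "sensor" is synthesized from it, and a deliberately trivial moving-average filter stands in for the fusion algorithm under test; the function names, noise figure and window size are all illustrative.

```python
import math
import random

def rmse(estimates, truth):
    """Root-mean-square error between algorithm output and ground truth."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truth))
                     / len(truth))

def simulate_sensor(truth, noise_std, seed=42):
    """Corrupt a known signal with Gaussian noise to mimic a real sensor."""
    rng = random.Random(seed)
    return [t + rng.gauss(0.0, noise_std) for t in truth]

# Known trajectory (the ground truth) and a simulated noisy reading of it
truth = [math.sin(0.01 * i) for i in range(1000)]
raw = simulate_sensor(truth, noise_std=0.1)

# A trivial moving average stands in for the fusion algorithm under test
window = 10
smoothed = [sum(raw[max(0, i - window + 1):i + 1]) / min(window, i + 1)
            for i in range(len(raw))]
```

Scoring both the raw stream and the algorithm's output against the same ground truth with `rmse` shows whether the algorithm actually improves on the unprocessed sensor, which is exactly the comparison real-world capture campaigns make at much greater cost.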

Sensor and chipset providers offer a variety of tools to help with these processes. “In addition to publishing our open source sensor fusion [Kalman filter] implementation, NXP also includes utilities for designers to generate simulated sensor data,” says Chen. “We have created a series of hardware platforms and software utilities around our sensors and MCUs to allow designers to quickly capture and visualize sensor data so that they can focus on creating value with their sensor algorithms and applications.”

The Changing Role of Sensor Hubs

Working hand in hand with the algorithms, sensor hubs represent one of the key enabling technologies of the fusion process. A specialized microcontroller, the hub helps to integrate and process data provided by the system’s various sensors, off-loading computing tasks from the main application processor and thus reducing power consumption.

Just as fusion technology has evolved in recent years, so too has the role of the sensor hub. A lot of this transformation has been driven by the proliferation of mobile devices and the growing implementation of context-aware services enabled by machine learning.

“The exciting thing over the past five years is that the term ‘sensor fusion’ is evolving in mobile,” says Jim Steele, vice president of Engineering, Intelligent Audio, at Knowles. “With the advent of machine learning techniques, context awareness of the user is becoming more and more possible. Knowing not only the location of a user, but also why he is there and what he is doing feeds a multitude of new applications.”

The “always-listening” VoiceIQ Smart Microphone from Knowles integrates an audio processing algorithm and acoustic activity detection directly into a digital microphone. As a result, the microphone recognizes when the audio chain should be awakened and when it should remain in sleep mode, delivering both power savings and effective noise suppression. Image courtesy of Knowles.

One aspect of the rise of context-aware applications is increased demand for always-on services, which has changed the operating requirements of the sensor hub. Always-on performance often translates into greater energy consumption, so fusion algorithms have had to incorporate new ways of conserving power.

The hub's energy budget is further strained by moves to incorporate more and more sensors into the mix. Consider, for example, the growing use of GPS receivers and gyroscopes. While these technologies provide fast and accurate results, they also consume considerable power.

New Technologies for New Demands

To accommodate these demands, design engineers have begun to adopt new hub technologies. These take the form of processors tailored to achieve greater energy efficiency.

For example, instead of merely aggregating sensor data, sensor hubs now process sensor data to enable context-awareness. As the workload of hubs has increased, designers have turned to multi-core processors. Using this approach, a smaller, more power-efficient core performs sensor data acquisition, while a second core processes sensor data, staying in a low-power state as much as possible.
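The split described above can be modeled in a short Python sketch; this is a conceptual illustration only, since real hubs implement the division in silicon and firmware. A cheap acquisition stage touches every sample, while a heavier batch-processing stage, standing in for the second core, wakes only when a batch is ready. The class name and batch size are assumptions for illustration.

```python
from collections import deque

class SensorHub:
    """Conceptual two-stage hub: a low-cost acquisition stage buffers raw
    samples, and the heavier processing stage runs only once per batch,
    mimicking a second core that sleeps between wake-ups."""
    def __init__(self, batch_size=32):
        self.batch_size = batch_size
        self.buffer = deque()
        self.processed_batches = 0
        self.mean = None

    def acquire(self, sample):
        # Stage 1: cheap path, executed for every incoming sample
        self.buffer.append(sample)
        if len(self.buffer) >= self.batch_size:
            self._process_batch()

    def _process_batch(self):
        # Stage 2: heavy path, wakes only when a full batch is ready
        batch = [self.buffer.popleft() for _ in range(self.batch_size)]
        self.mean = sum(batch) / len(batch)   # stand-in for real fusion work
        self.processed_batches += 1

hub = SensorHub(batch_size=32)
for s in range(128):
    hub.acquire(float(s))
# 128 samples arrive, but the heavy stage runs only 4 times, not 128
```

Batching is what lets the power-hungry stage stay in a low-power state most of the time; the cost, as with any deferral, is added latency between a sample arriving and its contribution appearing in the fused output.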

Another power-saving device is the digital signal processor (DSP). “As more complex use cases come about, the compute efficiency of a DSP is often preferred,” says Steele. “Knowles provides DSP cores with additional hardware accelerations for sensor fusion and audio processing, which is essential for the low-power, high-compute performance necessary for the next generation of sensor fusion algorithms.”

Perspectives and Tools

All these new demands on sensor fusion systems drive home the importance of understanding the use case. This includes making sure the algorithm is not wasting computing power to achieve precision where no accuracy can be gained because of limitations imposed by noise embedded in the sensor data.

An example of this can be seen in handheld applications, where natural hand tremor limits the meaningful resolution. In cases like this, designers can reduce trigonometric functions to lower-order terms and use fixed-point math or a simplified floating-point library without compromising the application.
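As an illustration of that trade, a third-order polynomial can stand in for a full sine call when tremor already limits resolution. The ±15° range and error bound below are assumptions chosen for the example, not figures from any cited design.

```python
import math

def sin_approx(x):
    """Third-order Taylor approximation of sin(x); the truncation error
    is about x**5 / 120, negligible once hand tremor limits the
    meaningful angular resolution anyway."""
    return x - x ** 3 / 6.0

# Worst-case error across a +/-15 degree range stays near 1e-5 rad,
# far below what a tremor-limited application can resolve
worst = max(abs(sin_approx(math.radians(d)) - math.sin(math.radians(d)))
            for d in range(-15, 16))
```

The polynomial involves only a multiply-accumulate chain, so it maps cleanly onto fixed-point hardware with no library call at all.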

The InvenSense single-chip ICM-30630 supports 6-axis motion tracking, integrating a tri-core processor, sensor framework software, a gyroscope and an accelerometer. The chip serves as a sensor hub, supporting the collection and processing of data from internal and external sensors. Image courtesy of InvenSense.

It is important for designers to recognize that there is no benefit to running sensor fusion at a sampling rate faster than the application demands. Real system power reduction comes from running the algorithm only when the input from the sensors is valuable. The tradeoff is added system latency and the risk that a lower-power trigger could cause the system to miss an important event.
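A minimal sketch of such gating, with a hypothetical wake threshold and a placeholder for the real fusion routine, might look like this:

```python
def expensive_fusion_step(sample):
    """Placeholder for the real fusion algorithm (hypothetical)."""
    return sample

def gated_fusion(samples, wake_threshold=0.05):
    """Run the costly fusion step only when the raw input changes enough
    to matter; otherwise hold the last output and skip the computation."""
    last, runs, outputs = 0.0, 0, []
    for s in samples:
        if abs(s - last) > wake_threshold:    # cheap trigger check
            last = expensive_fusion_step(s)   # heavy path, runs rarely
            runs += 1
        outputs.append(last)
    return outputs, runs

# 200 samples arrive, but the heavy step fires only once, at the step change
outputs, runs = gated_fusion([0.0] * 100 + [1.0] * 100)
```

The risk named above is visible in the code: a slow drift that never exceeds the threshold on any single step would go unnoticed, so the threshold has to be chosen against the smallest event the application cares about.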

“Given that power consumption has become one of the primary criteria in specifying a sensor hub, designers should always evaluate sensor hub power consumption for their expected workload,” says Chen. “To help with this, some IDEs (integrated development environments) like NXP’s LPCXpresso allow programmers to estimate the power consumption of their program.”

At the very least, sensor hubs should come with a proven sensor framework and dedicated sensor application development tools. “By itself a bare-bone sensor hub without the supporting software is going to require significant software development,” says Medina. “In many cases, great hardware specs of the sensor hub—especially around power—may not matter because inefficient sensor framework software may end up consuming excessive amounts of power.”

Making the Right Connections

Another area critical to sensor fusion's success is the choice of interface linking each sensor to the hub. Keep in mind that individual sensor types have their own constraints in terms of frequency, latency and power requirements, so the challenge is to match each sensor's requirements with those of the application.

With mobile and wearable applications, where battery life is a major concern, designers must minimize the static power consumption of the interface. For example, the pull-up resistors found in I2C interfaces draw constant power and can be a significant drawback. On the other hand, I2C requires only two signal pins and is therefore economical for the microcontroller.

The new Sensewire interface—aka I3C—being proposed by the Mobile Industry Processor Interface Alliance attempts to capture both qualities. Unfortunately, it is not yet widely supported by the hardware market.

Commonly used in industrial applications, SPI (serial peripheral interface) consumes little standby power but requires more pins.

And for automotive applications, engineers can avail themselves of specialized interfaces like SENT, PSI and DSI. The industry has optimized these to deliver robustness to designs concerned with functional safety.

Looking Forward

As sensor fusion’s role in applications expands, engineers can expect to see growing numbers and varieties of sensors incorporated into the process. Some of these may even take the form of ad hoc sensor networks.

Engineers will continue to use established software approaches, such as Kalman filters, implementing them in both traditional and emerging applications, such as advanced driver assistance systems (ADAS). At the same time, designers will increasingly use machine-learning algorithms to process both raw sensor data and the results of sensor fusion. This will take sensor fusion into applications like context-aware computing and enable the delivery of more complex services and content.



About the Author

Tom Kevan

Tom Kevan is a freelance writer/editor specializing in engineering and communications technology.
