Transforming the Field of Embedded Vision Through Deep Learning
Last week, I had the opportunity to attend the Embedded Vision Summit in Santa Clara, California, which focuses on the latest developments in the relatively new field of embedded vision. The Summit is a three-day event for innovators who want to integrate visual intelligence into products. This year, the theme focused on deployable computer vision and its use of deep learning technology.
Embedded vision, or the use of computer vision in machines, has been revolutionized by deep learning over the last three years. In many cases, computer vision paired with deep learning algorithms can identify objects as accurately as, or more accurately than, humans. This, along with the falling cost of powerful processing platforms and cameras and the improvement of development tools, makes it clear that cameras are the sensor of the future. In the next three to five years, we will see rapid adoption of cameras as sensors in applications that benefit from visual information. Even applications that use non-vision-based sensors today will be converted to cameras. With this transition, we’ll start to see image sensors and processors embedded into a single chip, making embedded vision systems simpler and easier to develop and, in turn, lowering costs and reducing size.
The integration of deep learning into embedded vision systems will prove lucrative for organizations willing to adopt it. Historically, data collected through vision sensors has always been perceived as valuable; however, because cameras were expensive, they were not widely used. Now, as we begin to explore the full extent of the technology’s capabilities, we’re discovering an expansive list of problems that embedded vision and deep learning hold the potential to solve. With access to new treasure troves of data, organizations can imagine new applications and insights and even expand into new markets. The technology also gives companies the means to enhance and improve existing products. Consider security cameras, for example. Currently, a security officer must manually monitor the camera’s footage. With embedded vision and deep learning, smart algorithms can automatically detect unusual behavior, saving organizations time and money.
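To make the security-camera idea concrete, here is a minimal sketch of how automatic flagging of unusual frames might be triggered. This is deliberately simplified: it uses plain frame differencing rather than a trained deep learning model, and the function names, frame format, and threshold are illustrative assumptions, not part of any product described above. In a real system, a trained network would score each frame, but the surrounding alerting logic would look similar.

```python
def mean_abs_diff(a, b):
    # Mean absolute per-pixel difference between two equally
    # sized grayscale frames, represented as lists of rows.
    total = sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return total / (len(a) * len(a[0]))

def flag_unusual_frames(frames, threshold=25.0):
    # Flag the indices of frames that differ sharply from the
    # previous frame -- a crude stand-in for a learned anomaly
    # detector (the threshold value here is arbitrary).
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i], frames[i - 1]) > threshold]

# Simulated 8x8 grayscale footage: a static scene, then a sudden change.
still = [[0] * 8 for _ in range(8)]
moved = [[200] * 8 for _ in range(8)]
footage = [still, still, moved, moved]
print(flag_unusual_frames(footage))  # -> [2]
```

Only frame index 2, where the scene changes abruptly, is flagged; the officer reviews that moment instead of watching hours of unchanged footage.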
Deep learning in embedded vision is advancing at a very rapid pace. As the technology evolves and becomes more widespread, abstraction will allow companies to focus their resources and expertise on higher-level activities. For now, there’s still a need for those who understand optical image sensors and core computer vision foundations. At Twisthink, we have been providing advanced vision and custom algorithm solutions for over a decade. We have integrated vision into products across a variety of application areas, such as lighting controls, building utilization, vehicle safety, and warehouse automation, and we want to help your organization bring vision into your own products!