Tuesday, 19 May 2020

Sony, Microsoft Partner to Deploy AI Analytics in New Image Sensors

Last week, Sony announced its IMX500, the first image sensor with an onboard DSP specifically intended for AI processing. Today, it announced the next step in that process: a partnership with Microsoft to provide an edge processing model.

The two firms signed an MOU (Memorandum of Understanding) last week to jointly develop new cloud solutions supporting their respective game- and content-streaming services, as well as to explore hosting Sony's offerings on Azure. Now, they've announced a more specific partnership around the IMX500.

Microsoft will embed Azure AI capabilities into the IMX500, while Sony is responsible for creating a smart camera app "powered by Azure IoT and Cognitive Services." The overall focus of the project appears to be on enterprise IoT customers, which fits with Microsoft's focus on the business end of the augmented reality market. For example, the IMX500 might be deployed to track inventory on store shelves or detect industrial spills in real time.
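
To make that architecture concrete, here's a minimal sketch in Python of how such a smart camera app might forward on-sensor detections to Azure IoT Hub. The azure-iot-device SDK calls are real, but the connection string, the payload format, and the get_sensor_detections() helper are hypothetical, since Sony hasn't published the sensor's output API.

```python
import json
import time

from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string; a real deployment would use a device
# identity provisioned in Azure IoT Hub.
CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=<camera>;SharedAccessKey=<key>"


def get_sensor_detections():
    """Hypothetical stand-in for metadata the IMX500 computes on-sensor.

    Sony hasn't published the output format; assume a list of labeled
    detections with confidence scores.
    """
    return [{"label": "empty_shelf_slot", "confidence": 0.93, "aisle": 4}]


def main():
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()
    try:
        while True:
            # Send only compact metadata, never raw frames: the point of
            # on-sensor inference is that pixels can stay on the camera.
            payload = {"detections": get_sensor_detections(), "ts": time.time()}
            msg = Message(json.dumps(payload))
            msg.content_type = "application/json"
            msg.content_encoding = "utf-8"
            client.send_message(msg)
            time.sleep(5)
    finally:
        client.disconnect()


if __name__ == "__main__":
    main()
```

One design point worth noting: because classification happens on the sensor, each telemetry message is a few hundred bytes of JSON rather than a video stream, which is what makes large fleets of such cameras practical on ordinary bandwidth.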

The Sony IMX500 (bare chip, left) and IMX501 (packaged model, right).

Sony claims that vendors will be able to develop their own AI and computer vision tools using the IMX500 and its associated software, raising the possibility of custom AI models built for specific purposes. Building those tools isn't easy, even when starting with premade models, and it's not clear how much additional performance or capability will be unlocked by integrating them directly into the image sensor.

In theory, the IMX500 could respond more quickly to simple queries than a standard camera. Sony argues that the IMX500 can run image detection algorithms extremely quickly, in roughly 3.1 ms, compared with the hundreds of milliseconds to seconds of latency of competing approaches that rely on sending traffic to cloud servers.
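
To put those numbers in perspective, here's a quick back-of-the-envelope calculation. The 300 ms cloud figure is an assumption standing in for "hundreds of milliseconds," not a measured value:

```python
# Latency budget arithmetic for a 30 fps video stream.
ON_SENSOR_MS = 3.1           # Sony's claimed on-sensor inference time
CLOUD_MS = 300.0             # assumed cloud round trip ("hundreds of ms")
FRAME_BUDGET_MS = 1000 / 30  # ~33.3 ms between frames at 30 fps

print(f"On-sensor inferences per second: {1000 / ON_SENSOR_MS:.0f}")         # ~323
print(f"Fits inside one frame at 30 fps? {ON_SENSOR_MS < FRAME_BUDGET_MS}")  # True
print(f"Frames of lag for a cloud round trip: {CLOUD_MS / FRAME_BUDGET_MS:.0f}")  # ~9
```

In other words, the on-sensor path can classify every frame of a 30 fps stream with headroom to spare, while a cloud round trip arrives roughly nine frames late.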

This is not to say that the IMX500 is a particularly powerful AI processor. By all accounts, it's suited to smaller inference tasks, with relatively limited compute. But it's a first step toward baking these kinds of functions into computer vision systems to allow for faster response times. In theory, robots could function safely in closer quarters with humans (or perform more complex tasks) if their image processing ran closer to the hardware, letting the machines react more quickly.
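
Here's an illustrative sketch of that timing argument. Every function in it is hypothetical, and the numbers simply restate the latency claims above:

```python
import time

REACTION_BUDGET_S = 0.010  # assume the arm must begin stopping within 10 ms


def read_detections_from_sensor():
    """Hypothetical: labeled detections already computed on the sensor (~3 ms)."""
    return [{"label": "person", "distance_m": 0.4}]


def halt_arm():
    """Hypothetical: trigger the actuator's emergency stop."""
    print("E-stop triggered")


for _ in range(3):  # bounded loop for demonstration
    start = time.monotonic()
    detections = read_detections_from_sensor()
    if any(d["label"] == "person" and d["distance_m"] < 0.5 for d in detections):
        halt_arm()
    elapsed = time.monotonic() - start
    # On-sensor inference (~3.1 ms) leaves room inside a 10 ms budget;
    # a cloud round trip of hundreds of milliseconds could not meet it.
    assert elapsed < REACTION_BUDGET_S
```

The budget is the whole argument: if detection itself eats hundreds of milliseconds, no amount of clever control logic downstream can make the machine safe at close range.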

It’s also interesting to see the further deepening of the Sony-Microsoft partnership. There’s no doubt that the two companies remain competitors in gaming, but outside of it, they’re getting downright chummy.

I’ve been impressed by AI’s ability to handle upscaling work in a lot of contexts, and self-driving cars continue to advance, but it isn’t clear when this kind of low-level edge processing integration will pay dividends for consumers. Companies that don’t make image sensors may continue to lean on SoC-level processing with onboard AI hardware engines rather than shifting workloads to the sensor itself. Baking AI capabilities into a camera sensor could also increase overall power consumption, depending on how the chip functions, so that will undoubtedly be a consideration for future product development.

No consumer applications or partners have been announced yet, but it’s a safe bet we’ll see the technology in ordinary hardware sooner rather than later, whether for face detection or some type of augmented image processing.

from ExtremeTech: https://www.extremetech.com/computing/310736-sony-microsoft-partner-to-deploy-ai-analytics-in-new-image-sensors
