Here are some of the exhibitors I had the opportunity to meet on the second day of this year’s show.
In Eureka Park, EAIGLE presented a comprehensive AI-based "all-in-one" kiosk with contactless visitor management, automated wellness screening, vaccine verification, people counting, capacity management and thermal crowd screening.
Essence Security, one of the largest global alarm system solution providers, won two CES Innovation Awards: one for its MyShield 5G-connected smoke generator and one for its Umbrella 5G-connected personal security alert device, which connects directly to Public Safety Answering Points (PSAPs) and central station monitoring providers. Be sure to read this article by Security Business Magazine Editor-in-Chief Paul Rothman to learn more about Essence's Umbrella solution and how it is expanding into commercial enterprise security applications.
If you drive an Audi S7 Sportback, a BMW M760Li xDrive or a 2021 Cadillac Escalade, you are already using uncooled IR sensors from InfiRay for driver assistance. At CES, the company unveiled the first uncooled 8 μm thermal camera sensor, which could have far-reaching industry potential for applications like body-worn cameras.
For the security and identification market, Isorg demonstrated its Fingerprint-on-Display (FoD) modules for improved fingerprint smartphone authentication, including better dry-finger performance in harsh conditions. The sensor modules support FAP30 and FAP60 capture and can read up to four fingers touching a smartphone display at the same time. Four-finger sensors scan the four fingers of each hand followed by the two thumbs (4-4-2). Each ten-print capture produces a full record with no stitching, making these scanners a fast, preferred option for FBI-standard enrollment. Four-finger scans also provide greater accuracy in identification processes.
Isorg's next step is to bring these innovations to market as a trusted partner for smartphone and security solution providers in mobile banking, border control, first responder and electronic access control applications.
CES 2022 became a living catalog of AI accelerators and systems on chip (SoCs), and, for some exhibitors like Femtosense, the introduction of an emerging category: the hyper-efficient AI processor for the embedded edge, also known as the Sparse Processing Unit (SPU).
When ADAS code was first used in luxury vehicles about six years ago to help save lives by braking early, avoiding pedestrians when reversing and detecting approaching objects faster than the driver's reaction time, the focus seemed to be on shipping stable systems, even if that meant millions of lines of code. Today's vehicles may require over 100 million lines, a complex architecture and dense neural networks. Higher density means higher computing demand, which drives up cost, heat and power consumption, and ultimately reduces the range of an electric vehicle.
For an operator trying to hear whether a call involves an unarmed domestic argument or gun violence, an SPU running efficient code that recognizes multiple speakers in the background can mean the difference between dispatching the wrong response team and saving lives.
Femtosense AI was able to demonstrate speech recognition with its SPU in the midst of the extremely noisy exhibition environment, reproducing speech unchanged while cutting out background noise in real time. The traditional noise cancellation technologies still sold in consumer electronics today are closer to sound canceling than to preserving audio frequencies.
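For contrast, the traditional approach can be sketched with classic spectral subtraction: estimate the noise spectrum from a noise-only segment and subtract it from each frame's magnitude. This is a generic DSP illustration using a synthetic signal, not Femtosense's neural method.

```python
import numpy as np

# Synthetic "speech" stand-in: a 440 Hz tone buried in white noise.
rng = np.random.default_rng(1)
sr, n_fft = 16000, 512
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
noise = 0.5 * rng.standard_normal(sr)
noisy = tone + noise

# Frame the signal and move to the frequency domain.
frames = noisy[: sr // n_fft * n_fft].reshape(-1, n_fft)
spec = np.fft.rfft(frames * np.hanning(n_fft))

# Noise estimate from a noise-only segment, subtracted per frame;
# clamp at zero so no bin goes to negative energy.
noise_mag = np.abs(np.fft.rfft(noise[:n_fft] * np.hanning(n_fft)))
clean_mag = np.maximum(np.abs(spec) - noise_mag, 0.0)

# Resynthesize with the original phase.
clean = np.fft.irfft(clean_mag * np.exp(1j * np.angle(spec)), n_fft)
```

Because the subtraction is applied bluntly across frequency bins, this style of processing tends to dull the signal it keeps, which is the limitation the article contrasts with frequency-preserving neural suppression.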
In AI-based video processing with objects moving in different directions in complex scenes, such as a burning building or a riot, video evidence may not be accurately reproduced. With the Femtosense AI SPU, parameter and activation sparsity can reduce the power requirement by 100 times and the memory consumption by 10 times.
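The sparsity idea behind an SPU can be illustrated in plain NumPy. This is a generic sketch, not Femtosense's implementation: a pruned layer stores only its nonzero weights, and the matrix-vector product skips the corresponding multiply-accumulates.

```python
import numpy as np

# A dense weight matrix with ~90% of entries pruned to zero,
# illustrating the parameter sparsity an SPU exploits.
rng = np.random.default_rng(0)
dense = rng.standard_normal((256, 256)).astype(np.float32)
dense[rng.random((256, 256)) < 0.9] = 0.0

# Compressed storage: keep only the nonzero values and their indices,
# cutting memory roughly in proportion to the sparsity.
rows, cols = np.nonzero(dense)
values = dense[rows, cols]

# Sparse matrix-vector product: only the stored weights contribute,
# skipping ~90% of the work a dense engine would perform.
x = rng.standard_normal(256).astype(np.float32)
y = np.zeros(256, dtype=np.float32)
np.add.at(y, rows, values * x[cols])
```

Activation sparsity works the same way on the input side: entries of `x` that are zero contribute nothing and can be skipped entirely, compounding the savings.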
In addition to its highly efficient neural network processors, Femtosense AI provides everything needed to move from a neural network model to the SPU, tasks normally performed by a solution provider who may not be familiar with processor development.
Visual Behavior works with companies like Waymo and NVIDIA to develop robotic awareness for applications including automated guided vehicles (AGVs), advanced driver assistance systems (ADAS) and unmanned aerial vehicles (UAVs).
Visual Behavior employs a remarkable new paradigm that focuses on the scene representation rather than on the sensors: an internal, persistent, symbolic representation of the world that is constantly updated. Its core technology is an artificial visual cortex, AI-supported software for scene understanding.
Use cases include improved driver safety in poor environmental conditions, better avoidance of multiple obstacles, and even object tracking between similar objects.
About the author:
Steve Surfaro is chairman of the Public Safety Working Group of the Security Industry Association (SIA) and has over 30 years of experience in the security industry. He is a specialist in smart cities and buildings, cybersecurity, forensic video, data science, command center design and first responder technologies. Follow him on Twitter @stevesurf.