Where are you seeing the most demand for high-performance edge AI?
It has been very exciting to see the growing adoption of AI in edge applications in recent years, and to watch the ecosystem evolve to benefit further from AI capabilities by implementing more sophisticated, high-performance use cases. I see this happening across the board.
ADAS/AV applications are one prominent example. These rely increasingly on multiple sensors, a significant portion of which are high-resolution cameras, which are data-intensive. The applications are sensitive to latency (short reaction time) and quality (low false-alarm and misdetection rates), which means that high-accuracy and high-FPS requirements drive the need for high compute.
In the Security & Public Safety domain, I see high-performance AI as especially necessary, both in cameras and in video-processing devices. In cameras, customers are looking to process high-resolution video using high-accuracy algorithms. This translates into wider effective coverage per camera (which means they can deploy fewer cameras to cover the same region of interest), as well as fewer false alerts (a better product producing better results). Customers are also looking to run multiple applications on the same stream (for example, monitoring occupancy in parallel with activity detection), which further drives the demand for greater compute.
In addition to cameras, there are systems processing multiple video streams. These are typically either smart aggregation points added to an existing “non-AI” camera deployment, or new deployments with a centralized architecture at the edge aimed at reducing total cost of ownership. There are also cameras with multiple sensors for wider angles of view. In all these cases, the compute system needs to run multiple models on multiple camera streams.
Industrial applications are also interesting. Here, the requirement comes from the desire to increase system ROI by enabling low-latency automation. Think of defect detection, for example: the detection application’s latency determines the speed of the production line, while its accuracy determines the level of process loss. Line speed and loss translate directly into production-line output per unit of time.
What do you think is currently missing from the Edge AI technology ecosystem?
The domain of edge AI deployment is young but developing very quickly. As a result, we encounter companies with a variety of software development approaches, each highlighting different software ecosystem needs. What we keep hearing about is the need for mature software tools, developer-enablement frameworks, and guidance in the emerging edge AI domain. These seem to be some of the missing pieces that will help speed up edge AI adoption.
For example, many companies choose to develop their applications on their own proprietary frameworks. Their developers value a quick and smooth implementation process. In many cases, these companies are looking for a flexible edge platform with an easy porting process that does not require changes to their application. These companies usually value the flexibility of our Hailo Dataflow Compiler in compiling proprietary models, and appreciate access to a rich Model Zoo that can serve as a good starting point for porting a proprietary application.
Another example is a significant group of companies who are newcomers to the AI domain. They are building their AI knowledge base and know-how, in many cases around a first specific AI use case. These companies are looking to quickly develop an application that meets their use case’s KPIs. They want to benefit from existing, proven application frameworks and to quickly implement their own data, business logic, or networks (their key differentiators) on top of them. With such customers in mind, we developed our TAPPAS applications toolkit – an infrastructure designed to make it easy to develop and deploy high-performance edge applications, which serve as a foundation for customer applications.
Lastly, many companies working on edge AI applications are looking to benefit from experts with a broad view and domain know-how, to expedite their process. We, as a company that sees many edge AI use cases, are in a unique position to also advise our customers based on the knowledge we have accumulated.
What, if any, are the barriers to wide adoption of edge AI?
While the potential of AI applications to deliver value across many use cases and markets is clear, edge AI is still in the process of being adopted. One of the factors I see as key to the market’s fast adoption of AI is the availability of the right processing solution. To us, the right solution is one based on efficient AI hardware and flexible software tools for easy application development.
In many cases the “AI promise” – the AI application that fills the need or addresses the use case – exists, but can only be demonstrated on servers. We have met many companies that had already implemented their “killer app,” or were very close to it, but realized that they needed it to work at the edge. Processing hardware that fits the device’s limitations (power, area) while meeting the application’s compute requirements can often become a real barrier. Without an efficient processor, companies work hard to optimize algorithms and reduce compute requirements, which prolongs development time and limits system performance.
We at Hailo set out to remove this barrier. We unlock efficient compute performance at the edge and enable such companies to deploy applications without the exhausting optimization cycles or any compromise on product KPIs.
Going forward, the availability of high-performance, efficient edge processors such as the Hailo-8 will further fuel the adoption of AI. A larger “processing budget” can unlock more AI value and new applications, making edge AI even more useful and common.
In short, I believe that the emergence of high-efficiency edge AI processing will remove a significant barrier and expect to see growing adoption and new exciting capabilities at the edge.
What makes Hailo different as an AI chip company?
From the very early days of the company, we decided to foster two key approaches that greatly shaped our differentiation – a multidisciplinary approach and an open-platform vision.
Unlike some chip companies that view hardware development as their main focus, we viewed AI processing as a multidisciplinary problem – spanning hardware, software tools, runtime software, and the AI application itself. We therefore decided that the solution should be built through a process that looks at the full scope of the problem. Following this approach, we focused on software and AI from day one and built a team of people who can (and like to!) work across different development domains at the same time. This approach led us to design and co-develop the different pieces of the solution (hardware and software) – a key enabler in developing a truly unique architecture that delivers state-of-the-art performance.
The second key approach was our vision of a truly open platform, which we think is vital to accelerating edge AI adoption. To fulfill this, we have invested heavily in flexible software tools that enable a fully programmable and versatile solution. We have leveraged our ML expertise to develop application-development tools that empower our customers and partners to run their own applications and differentiate, rather than forcing them into a rigid set of functions, tasks, and applications.
In addition to our technology and product differentiation, I am very proud of our proven, top-notch execution capabilities. Our company was able to plan, develop, and manufacture the best-performing edge AI processor from scratch (an entirely new processor architecture), and in record time. The ability to deliver a highly innovative, production-grade processor with a comprehensive software suite is an important differentiator, as it allows our unique technology to meet real-world use cases in the market quickly.
What are the most important drivers of startup success?
In 2021, Hailo celebrated four years since its inception. It’s amazing to think back on the incredible journey we have had, and even more astonishing to see where it has taken us. This would not have been possible without a hawk-like focus on our unique value as a company, the relationships we have built with our customers and partners, and an amazing team that has now grown to almost 200!
First, at every stage of the company, I think it is important to know where you deliver unique value. There will be distractions – things that tempt you to pivot or turn – so you must keep reminding yourself of it and keep working at it. Sharpen your understanding of where your unique value lies, focus your precious resources there, and understand what else is required to drive the business and which partners can complement you.
The latter is critically important. You can’t do it alone, without close collaboration with customers and partners. It is vital to always seek to learn from your customers about their goals and pain points. Treat your relationship with them as an investment: build it early on and maintain it, not only when you want to make a sale. This belief has helped us build strategic, mutually beneficial relationships and, in the process, a strong understanding of the market and our product fit (both absolutely invaluable for a startup).
Lastly, there’s our team. Everyone says it, but that is because it is true. Make sure you build an A-team! What does a winning team look like? They are best-in-class professionals with an execution-oriented spirit and the ability to drive results while working collaboratively. No less important, they are excited about the challenge and the opportunities we face.
Saying goodbye to 2021 and looking forward to a new year, I could not be prouder of the progress we have made. I cannot wait to continue working with our awesome team to help companies across the globe unlock value as part of this fascinating AI revolution.
And on that note, I’d like to wish you all happy holidays and a wonderful 2022, filled with great opportunities, companionship, and success!