My hands-on test run with the Jetson AGX Orin

What you will learn

  • What is the Jetson AGX Orin Development Kit?
  • What is JupyterLab and how does it relate to Jetson AGX Orin?
  • Why Bill Wong doesn’t follow instructions.

Ok, I’ll start with the last bullet point first. I would have written this review sooner if I had gotten things working sooner. However, because I didn’t read the instructions and didn’t do things in the correct order, I had to go back and forth with tech support to find out why I couldn’t run the demos. Bear with me: the system comes out of the box running Ubuntu, as mentioned in my first close-up video of the kit.

After installing the JetPack SDK and DeepStream software, I was able to run through all the demos and benchmarks quickly. I’ll skip the benchmark numbers, since you can easily find the system specs and the results match what the hardware promises. Benchmarking was one of the few workloads that made the fan run long enough to be noticed; it pushed the hardware to its limits, which most of the demos didn’t.

The development kit is a complete system with the Jetson AGX Orin module at its core (Fig. 1). The generous onboard memory and flash storage meant I didn’t have to worry about adding an NVMe M.2 card as I had on previous Jetson platforms, although adding one can be useful for more demanding applications. Also shown is the smaller Jetson Orin module, which has less memory and a lower power budget than its big brother; it suits lower-cost, lighter-weight applications that require slightly less processing power.

Assuming you don’t make my mistake, you could finish testing the system in an afternoon.

Software support

To use the system fully, you need to become familiar with JupyterLab. It’s possible to use all the libraries and tools without it, but most of the demos and support material are delivered through JupyterLab (Fig. 2). It’s a notebook-style, web-based interactive environment that has been adopted by a number of AI developers and platforms.

The system is very nice: it can run things like command-line scripts and display the results in the same browser window as the commands. It works well with other open-source platforms such as Docker and Kubernetes, which is important for NVIDIA since these are used both in the cloud and on platforms like the Jetson AGX Orin. Some of the demos are packaged as JupyterLab notebooks, and there is also a multiuser version called JupyterHub.
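
To make the Docker tie-in concrete: on a JetPack installation, containers get GPU access through the NVIDIA container runtime. Here is a minimal sketch, assuming Docker as JetPack sets it up and an NGC base image whose tag varies by release; run it from a JupyterLab terminal if sudo prompts for a password.

# JupyterLab cell: start a container with GPU access on the Jetson.
# "--runtime nvidia" exposes the GPU; the l4t-base tag is illustrative.
!sudo docker run --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r35.1.0 uname -a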

I haven’t delved deeply into JupyterLab at this point, but you can run the code in a block by simply pressing Ctrl-Enter while the cursor is in the block. Likewise, a bar on the left lets you expand or collapse a block, which is handy since some results can run on for pages.
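
For a flavor of how that works, here is the kind of cell you would run with Ctrl-Enter. This is my own minimal sketch rather than one of NVIDIA’s notebooks: in JupyterLab’s Python kernel, lines prefixed with ! go to the shell, so one cell can mix commands and code, with the output appearing right below it.

# A typical JupyterLab cell; press Ctrl-Enter to run it in place.
# "!" lines are handed to the shell; the output shows up under the cell
# and can be collapsed with the bar on the left edge.
!uname -m    # confirm we're on the Arm-based Jetson
!free -h     # quick look at memory headroom

import platform
print("Python", platform.python_version())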

TAO: train, adapt and optimize

Just for the record, my setup mistake was that DeepStream hadn’t installed properly. Once that was fixed, I was able to look at the pre-trained models and use the TAO (train, adapt, and optimize) support. TAO runs containers, either in the cloud or on the system, to train a model or use a trained one. The example running a trained model on the Jetson AGX Orin was able to identify multiple people moving across multiple video streams (Fig. 3).
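
For reference, the stock DeepStream demos are driven by configuration files, so a correct install is easy to verify by launching one. A minimal sketch, assuming a DeepStream 6.x install in its default location; the version directory and config filename below vary by release.

# Launch one of DeepStream's reference pipelines from a notebook cell.
# Sample configs ship under the SDK's samples directory; any of the
# deepstream-app configs there will do (this filename is illustrative).
!deepstream-app -c /opt/nvidia/deepstream/deepstream-6.1/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt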

Getting everything working was just a matter of stepping through the JupyterLab notebooks for the various demos. The platform appeared to have ample headroom; the Linux load was low, and so was the thermal load, judging by the fan. I wasn’t sure how to check the load on the GPU or the AI accelerators, but I suspect it was low, too. That would mean a single chip could easily handle half a dozen cameras in a car, allowing even more comprehensive analysis of the video streams than the basic person identification used here.
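
For what it’s worth, JetPack does ship a tool for exactly this: tegrastats prints CPU, GPU, and memory utilization along with temperatures, although the exact field layout varies by release. A quick way to sample it from a notebook cell:

# Sample Jetson utilization for five seconds, one line per second.
# GR3D_FREQ is the GPU load figure; run with sudo for the full counters.
!timeout 5 tegrastats --interval 1000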

Of course, it had no problem picking me out (Fig. 4). I didn’t have a crowd or multiple cameras on hand, but I don’t doubt that the system would work just as well in those cases, since I could feed it different video files.

Riva speech analysis

There are no pretty pictures for the Riva demonstration, since it deals with audio. To see the gibberish I was feeding it, it’s just a matter of looking at the transcript of my utterances, which the Jetson AGX Orin and the Riva software analyzed and presented to me with no problem.

The Riva SDK was developed for creating speech applications. The model improvements alone have boosted performance by more than a factor of 10, and it matches or exceeds the Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) support available in the cloud. I only used English, but pre-trained models for other languages are available.
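
To give a sense of the programming side, here is a minimal offline-transcription sketch using NVIDIA’s Python client for Riva. It assumes the nvidia-riva-client package is installed, a Riva server is running locally on its default port, and a 16-kHz mono WAV file named audio.wav; none of these specifics come from the demo itself.

# Minimal offline ASR request against a local Riva server.
# pip install nvidia-riva-client
import riva.client

auth = riva.client.Auth(uri="localhost:50051")  # default Riva gRPC endpoint
asr = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
    sample_rate_hertz=16000,   # must match the WAV file
    language_code="en-US",     # pre-trained models exist for other languages
    max_alternatives=1,
    enable_automatic_punctuation=True,
)

# Read the whole file and let the server transcribe it in one shot.
with open("audio.wav", "rb") as fh:
    audio_bytes = fh.read()

response = asr.offline_recognize(audio_bytes, config)
for result in response.results:
    print(result.alternatives[0].transcript)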

The previously mentioned TAO support was shown with video streams, but the same approach works for other platforms such as Riva. Likewise, Riva uses NVIDIA TensorRT optimizations and can be deployed with the NVIDIA Triton Inference Server if cloud solutions better suit your needs. For me, the standalone support on the Jetson AGX Orin is more interesting: it could handle multiple audio streams, as in a car with several people and several microphones.

Follow-up

While the demos and benchmarks ran right out of the box, getting started with things like the Riva SDK requires a lot more work, especially at the programming level. The reason is simply the large number of interfaces and options, plus the handling of TAO and the rest, and that’s before counting the underlying technologies such as TensorRT, cuDNN, and CUDA.

Still, this is where NVIDIA excels: documentation and library support are available and pretty good. Likewise, most of the software spans the platforms from Jetson’s low-end modules to the company’s high-end enterprise systems. The latter usually power the cloud, which matters for cloud-based training.

Compared to the first Jetson platform I used, this one ranks so high in terms of quality and out-of-the-box support that I’m amazed.
